
benchdnn-3.3.3-1.3 RPM for x86_64

From openSUSE Tumbleweed for x86_64

Name: benchdnn
Distribution: openSUSE Tumbleweed
Version: 3.3.3
Vendor: openSUSE
Release: 1.3
Build date: Tue Dec 26 22:32:41 2023
Group: Unspecified
Build host: reproducible
Size: 17870738
Source RPM: onednn-3.3.3-1.3.src.rpm
Packager: https://bugs.opensuse.org
Url: https://01.org/onednn
Summary: Benchmark utility for oneDNN (Intel MKL-DNN)

Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN) is an
open-source performance library for deep-learning applications. The library
accelerates deep-learning applications and frameworks on Intel architecture.
Intel MKL-DNN contains vectorized and threaded building blocks that you can use
to implement deep neural networks (DNN) with C and C++ interfaces.

This package includes only the benchmark utility and its input files.
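As a usage sketch (assuming the package is installed; the driver flags and the problem-descriptor syntax follow benchdnn's documented conventions, and the batch file path is one of the input files listed below), benchdnn is invoked with a driver flag plus either an inline problem descriptor or a bundled batch file:

```shell
# Run the convolution driver on one inline problem descriptor
# (minibatch 2, 16 -> 16 channels, 16x16 spatial, 3x3 kernel, padding 1):
benchdnn --conv --dir=FWD_B mb2ic16ih16oc16oh16kh3ph1

# Run a batch file shipped in this package; correctness testing is the
# default mode, --mode=P switches to performance measurement:
benchdnn --conv --mode=P --batch=/usr/share/benchdnn/inputs/conv/shapes_resnet_50
```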

Provides

Requires

License

Apache-2.0

Changelog

* Tue Dec 26 2023 Alessandro de Oliveira Faria <cabelo@opensuse.org>
  - Update to 3.3.3:
  - This is a patch release containing the following changes to v3.3.2:
    * Fixed performance regression in int8 convolutions on processors with Intel AVX-512 and Intel DL Boost support (a00661f)
    * Fixed race condition during library initialization on Intel Data Center GPU Max Series (7dfcd11)
    * Fixed accuracy issue in experimental Graph Compiler with LLVM code generator (8892e7e)
    * Disabled int8 RNN implementation for cases with non-trivial strides (2195e4b)
    * Fixed incorrect results in bfloat16 convolution implementation on processors with Intel AMX support (9f00af9)
    * Fixed incorrect results in fp16 and int8 convolution on Intel Core Ultra integrated GPUs (69cef84, 79bc6cc, c9c0b09)
* Fri Dec 01 2023 Alessandro de Oliveira Faria <cabelo@opensuse.org>
  - Update to 3.3.1:
  - This is a patch release containing the following changes to v3.3:
    * Fixed int8 convolution accuracy issue on Intel GPUs (09c87c7)
    * Switched internal stream to in-order mode for NVIDIA and AMD GPUs to avoid synchronization issues (db01d62)
    * Fixed runtime error for avgpool_bwd operation in Graph API (d025ef6, 9e0602a, e0dc1b3)
    * Fixed benchdnn error reporting for some Graph API cases (98dc9db)
    * Fixed accuracy issue in experimental Graph Compiler for int8 MHA variant from StarCoder model (5476ef7)
    * Fixed incorrect results for layer normalization with trivial dimensions on Intel GPUs (a2ec0a0)
    * Removed redundant synchronization for out-of-order SYCL queues (a96e9b1)
    * Fixed runtime error in experimental Graph Compiler for int8 MLP subgraph from LLAMA model (595543d)
    * Fixed SEGFAULT in experimental Graph Compiler for fp32 MLP subgraph (4207105)
    * Fixed incorrect results in experimental Graph Compiler for MLP subgraph (57e14b5)
    * Fixed the issue with f16 inner product primitive with s8 output returning unimplemented on Intel GPUs (bf12207, 800b5e9, ec7054a)
    * Fixed incorrect results for int8 deconvolution with zero-points on processors with Intel AMX instructions support (55d2cec)
* Tue Oct 10 2023 Paolo Stivanin <info@paolostivanin.com>
  - Update to 3.3:
    * 3.3: https://github.com/oneapi-src/oneDNN/releases/tag/v3.3
    * 3.2: https://github.com/oneapi-src/oneDNN/releases/tag/v3.2
    * 3.1: https://github.com/oneapi-src/oneDNN/releases/tag/v3.1
  - Drop upstreamed onednn-fix-gcc13.patch
* Tue Mar 21 2023 Guillaume GARDET <guillaume.gardet@opensuse.org>
  - Update to 3.0.1:
    * Changes: https://github.com/oneapi-src/oneDNN/releases/tag/v3.0.1
  - Skipped 3.0:
    * Changes: https://github.com/oneapi-src/oneDNN/releases/tag/v3.0
  - Add patch to fix build with GCC13:
    * onednn-fix-gcc13.patch
  - Disable Arm Compute library support until fixed upstream
    https://github.com/oneapi-src/oneDNN/issues/1599
  - Drop upstream patches:
    * 1428.patch
    * fa93750.patch
* Tue Sep 20 2022 Guillaume GARDET <guillaume.gardet@opensuse.org>
  - Add patch to fix build with latest Arm Compute Library:
    * 1428.patch
    * fa93750.patch (dep for 1428.patch)
* Tue Sep 13 2022 Paolo Stivanin <info@paolostivanin.com>
  - Update to 2.6.2:
    * https://github.com/oneapi-src/oneDNN/releases
  - Removed onednn-1045.patch.
  - Removed onednn-xbyak-aarch64.patch.
* Tue Jun 15 2021 Guillaume GARDET <guillaume.gardet@opensuse.org>
  - Fix build on aarch64:
    * onednn-xbyak-aarch64.patch
* Tue Jun 15 2021 Guillaume GARDET <guillaume.gardet@opensuse.org>
  - Update to version 2.2.4:
    * Fixed build error with GCC 11 (eda1add)
    * Fixed an issue with reorder reporting unimplemented when
      quantizing f32 weights to s8 (4f05b76, 5d3d1e1, cc77eef)
    * Updated name for GPU gen12 architecture to xe (3d202c2)
  - Drop upstream patch:
    * 0001-common-gpu-include-thread-and-limit-headers-to-fix-G.patch
* Thu Jun 03 2021 Ferdinand Thiessen <rpm@fthiessen.de>
  - Update to version 2.2.3
    * Fixed a bug in int8 depthwise convolution primitive with groups
      and 1d spatial size for processors with AVX-512 and AVX2 support
    * Fixed correctness issue for PReLU primitive
    * Fixed correctness issue in reorder for blocked layouts with
      zero padding
    * Improved performance of weights reorders used by BRGEMM-based
      convolution primitive for processors with AVX-512 support
    * Added -fp-model=precise build flag for DPC++ code
    * Fixed potential memory leak in matmul primitive
    * Fixed performance of matmul primitive when fused with bias
      update and sum
    * Fixed a bug in matmul primitive when writing to non-contiguous
      destination buffer
  - Add upstream patch for GCC11 support
    * 0001-common-gpu-include-thread-and-limit-headers-to-fix-G.patch
* Thu May 27 2021 Jan Engelhardt <jengelh@inai.de>
  - Update descriptions.
* Wed May 26 2021 Guillaume GARDET <guillaume.gardet@opensuse.org>
  - Update to 2.2.2, changes:
    * Fixed performance regression in fp32 forward inner product for
      shapes with number of output channels equal to 1 for processors
      with Intel AVX-512 support (714b1fd)
    * Fixed performance regression in forward convolutions with groups
      for processors with Intel AVX-512 support (3555d4a)
    * Removed -std=c++11 build flag for DPC++ headers (1fcb867)
    * Fixed buffer access when initializing workspace in RNN
      implementation on GPU (9b03091)
    * Fixed a bug in convolution with 1x1 kernel and mixed
      strides on processors with Intel AVX-512 support (d0b3e3f)
    * Used getauxval on Linux to get CPU features for AArch64
      systems (25c4cea)
    * Added -fp-model=precise build flag for DPC++ code (3e40e5e)
    * Fixed out-of-bounds writes in elementwise primitive on
      Intel Processor Graphics (bcf823c)
  - Fix build with Arm Compute Library:
    * onednn-1045.patch
* Tue Apr 13 2021 Guillaume GARDET <guillaume.gardet@opensuse.org>
  - Update to 2.2.1, changes:
    * From 2.2:
    Fixed segfault for cases when primitive descriptor or attributes contain NaN (e6d05ec, dbca1e9, 0326b09)
    Fixed engine creation failure for GPU subdevices (4c3a114)
    Fixed long lines clipping in verbose output (70d70a8)
    Fixed segfault in bfloat16 convolution weight gradient implementation on processors with Intel AMX support (a3a73a3)
    Fixed performance regression in binary primitive with per_oc broadcast strategy (9ac85d8)
    Worked around a bug with Microsoft Visual C++ compiler version detection in CMake 3.19 (2f39155)
    Removed -std=c++11 build flag for DPC++ code to align with SYCL standard (1b026f5)
    * Changes between 2.1 and 2.2:
    Performance Optimizations
      Intel Architecture processors
      Improved performance of int8 compute functionality for future Intel Xeon Scalable processor (code name Sapphire Rapids). The functionality is disabled by default and should be enabled via CPU dispatcher control.
      Improved performance of compute functionality for future Intel Core processor with Intel AVX2 and Intel DL Boost instructions support (code name Alder Lake).
      Improved fp32 inner product forward propagation performance for processors with Intel AVX-512 support.
      Improved dnnl_gemm performance for cases with n=1 on all supported processors.
      Intel Graphics products
      Introduced NHWC format support for activations for int8 primitives.
      AArch64-based processors
      Improved performance of fp32 and int8 convolution, and softmax primitives for processors with SVE 512 support.
      Improved performance of fp32 convolution via Arm Compute Library (ACL).
      Improved performance of convolution with a combination of sum and relu post-ops via ACL.
    Functionality
      Extended eltwise primitive with support for mish and hardswish algorithms.
      Extended binary primitive with support for comparison operators.
      Introduced support for post-ops in GPU resampling implementation.
      Introduced asymmetric quantization support for int8 deconvolution.
      Introduced binary post-ops support for matmul primitive.
    Usability
      Improved presentation of oneDNN primitives in VTune Amplifier.
      Introduced Linux perf support for AArch64.
      Introduced support for Fujitsu C++ compiler.
      Introduced a build time check for minimal supported ACL version. Currently oneDNN requires ACL 21.02 or later.
      Added support for cuDNN 8.x
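    For reference, the mish and hardswish activations added to the eltwise primitive above are standard functions; a plain-Python sketch of the math (not the library's API) looks like this:

    ```python
    import math

    def mish(x: float) -> float:
        # mish(x) = x * tanh(softplus(x)), with softplus(x) = ln(1 + e^x)
        return x * math.tanh(math.log1p(math.exp(x)))

    def hardswish(x: float) -> float:
        # hardswish(x) = x * max(0, min(6, x + 3)) / 6
        return x * max(0.0, min(6.0, x + 3.0)) / 6.0

    print(mish(0.0))       # 0.0 (tanh(ln 2) scaled by x = 0)
    print(hardswish(3.0))  # 3.0 (x + 3 saturates at 6, so x * 6/6)
    ```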
* Wed Feb 17 2021 Guillaume GARDET <guillaume.gardet@opensuse.org>
  - Update to 2.1
  - Add Arm ComputeLibrary support on aarch64

Files

/usr/bin/benchdnn
/usr/share/benchdnn
/usr/share/benchdnn/inputs
/usr/share/benchdnn/inputs/binary
/usr/share/benchdnn/inputs/binary/harness_binary_bf16
/usr/share/benchdnn/inputs/binary/harness_binary_different_dt
/usr/share/benchdnn/inputs/binary/harness_binary_f16
/usr/share/benchdnn/inputs/binary/harness_binary_f32
/usr/share/benchdnn/inputs/binary/harness_binary_i8
/usr/share/benchdnn/inputs/binary/harness_binary_regression
/usr/share/benchdnn/inputs/binary/option_set_all
/usr/share/benchdnn/inputs/binary/option_set_fwks_ext_gpu
/usr/share/benchdnn/inputs/binary/option_set_fwks_key_gpu
/usr/share/benchdnn/inputs/binary/option_set_minimal
/usr/share/benchdnn/inputs/binary/option_set_src0_bcast
/usr/share/benchdnn/inputs/binary/perf_binary_gpu
/usr/share/benchdnn/inputs/binary/shapes_ci
/usr/share/benchdnn/inputs/binary/shapes_perf_1st_conv
/usr/share/benchdnn/inputs/binary/shapes_perf_scaleshift
/usr/share/benchdnn/inputs/binary/test_binary_all
/usr/share/benchdnn/inputs/binary/test_binary_bfloat16
/usr/share/benchdnn/inputs/binary/test_binary_ci
/usr/share/benchdnn/inputs/binary/test_binary_different_dt_ci
/usr/share/benchdnn/inputs/binary/test_binary_float16
/usr/share/benchdnn/inputs/binary/test_binary_gpu
/usr/share/benchdnn/inputs/binary/test_binary_smoke
/usr/share/benchdnn/inputs/bnorm
/usr/share/benchdnn/inputs/bnorm/option_set_fwks_ext_gpu
/usr/share/benchdnn/inputs/bnorm/option_set_fwks_key_gpu
/usr/share/benchdnn/inputs/bnorm/perf_bnorm_gpu
/usr/share/benchdnn/inputs/bnorm/set_nd
/usr/share/benchdnn/inputs/bnorm/shapes_1d
/usr/share/benchdnn/inputs/bnorm/shapes_2d
/usr/share/benchdnn/inputs/bnorm/shapes_3d
/usr/share/benchdnn/inputs/bnorm/shapes_ci
/usr/share/benchdnn/inputs/bnorm/shapes_densenet_121
/usr/share/benchdnn/inputs/bnorm/shapes_googlenet_v2
/usr/share/benchdnn/inputs/bnorm/shapes_googlenet_v3
/usr/share/benchdnn/inputs/bnorm/shapes_large
/usr/share/benchdnn/inputs/bnorm/shapes_regressions
/usr/share/benchdnn/inputs/bnorm/shapes_resnet_50
/usr/share/benchdnn/inputs/bnorm/shapes_topologies_small
/usr/share/benchdnn/inputs/bnorm/test_bnorm_all_blocked
/usr/share/benchdnn/inputs/bnorm/test_bnorm_all_plain
/usr/share/benchdnn/inputs/bnorm/test_bnorm_bfloat16_blocked
/usr/share/benchdnn/inputs/bnorm/test_bnorm_bfloat16_plain
/usr/share/benchdnn/inputs/bnorm/test_bnorm_ci
/usr/share/benchdnn/inputs/bnorm/test_bnorm_float16_plain
/usr/share/benchdnn/inputs/bnorm/test_bnorm_gpu
/usr/share/benchdnn/inputs/bnorm/test_bnorm_regressions
/usr/share/benchdnn/inputs/bnorm/test_bnorm_regressions_large
/usr/share/benchdnn/inputs/bnorm/test_bnorm_smoke
/usr/share/benchdnn/inputs/brgemm
/usr/share/benchdnn/inputs/brgemm/harness_brgemm_f32
/usr/share/benchdnn/inputs/brgemm/harness_brgemm_fpmath
/usr/share/benchdnn/inputs/brgemm/harness_brgemm_skip_acc
/usr/share/benchdnn/inputs/brgemm/option_set_bf16
/usr/share/benchdnn/inputs/brgemm/option_set_f32
/usr/share/benchdnn/inputs/brgemm/option_set_int8
/usr/share/benchdnn/inputs/brgemm/shapes_2d_big_k_bf16
/usr/share/benchdnn/inputs/brgemm/shapes_2d_big_k_f32
/usr/share/benchdnn/inputs/brgemm/shapes_2d_big_k_int8
/usr/share/benchdnn/inputs/brgemm/shapes_2d_big_k_tail_n_bf16
/usr/share/benchdnn/inputs/brgemm/shapes_2d_big_k_tail_n_f32
/usr/share/benchdnn/inputs/brgemm/shapes_2d_big_k_tail_n_int8
/usr/share/benchdnn/inputs/brgemm/shapes_2d_no_tail_bf16
/usr/share/benchdnn/inputs/brgemm/shapes_2d_no_tail_f32
/usr/share/benchdnn/inputs/brgemm/shapes_2d_no_tail_int8
/usr/share/benchdnn/inputs/brgemm/shapes_2d_tail_k_bf16
/usr/share/benchdnn/inputs/brgemm/shapes_2d_tail_k_f32
/usr/share/benchdnn/inputs/brgemm/shapes_2d_tail_k_int8
/usr/share/benchdnn/inputs/brgemm/shapes_2d_tail_k_tail_n_bf16
/usr/share/benchdnn/inputs/brgemm/shapes_2d_tail_k_tail_n_f32
/usr/share/benchdnn/inputs/brgemm/shapes_2d_tail_k_tail_n_int8
/usr/share/benchdnn/inputs/brgemm/shapes_2d_tail_n_bf16
/usr/share/benchdnn/inputs/brgemm/shapes_2d_tail_n_f32
/usr/share/benchdnn/inputs/brgemm/shapes_2d_tail_n_int8
/usr/share/benchdnn/inputs/brgemm/test_brgemm_all
/usr/share/benchdnn/inputs/brgemm/test_brgemm_bf16
/usr/share/benchdnn/inputs/brgemm/test_brgemm_ci
/usr/share/benchdnn/inputs/brgemm/test_brgemm_f16
/usr/share/benchdnn/inputs/brgemm/test_brgemm_int8
/usr/share/benchdnn/inputs/brgemm/test_brgemm_smoke
/usr/share/benchdnn/inputs/concat
/usr/share/benchdnn/inputs/concat/option_set_fwks_ext_gpu
/usr/share/benchdnn/inputs/concat/option_set_fwks_key_gpu
/usr/share/benchdnn/inputs/concat/option_set_gen9_gpu
/usr/share/benchdnn/inputs/concat/test_concat_all
/usr/share/benchdnn/inputs/concat/test_concat_bfloat16
/usr/share/benchdnn/inputs/concat/test_concat_ci
/usr/share/benchdnn/inputs/concat/test_concat_float16
/usr/share/benchdnn/inputs/concat/test_concat_gpu
/usr/share/benchdnn/inputs/concat/test_concat_smoke
/usr/share/benchdnn/inputs/conv
/usr/share/benchdnn/inputs/conv/harness_conv_arbitrary_dst
/usr/share/benchdnn/inputs/conv/harness_conv_attrs_gpu
/usr/share/benchdnn/inputs/conv/harness_conv_attrs_int8
/usr/share/benchdnn/inputs/conv/harness_conv_attrs_int8_asymmetric
/usr/share/benchdnn/inputs/conv/harness_conv_auto
/usr/share/benchdnn/inputs/conv/harness_conv_deepbench
/usr/share/benchdnn/inputs/conv/harness_conv_depthwise_int8
/usr/share/benchdnn/inputs/conv/harness_conv_dilated_3d
/usr/share/benchdnn/inputs/conv/harness_conv_dilated_int8
/usr/share/benchdnn/inputs/conv/harness_conv_dw_bfloat16
/usr/share/benchdnn/inputs/conv/harness_conv_dw_bfloat16_nxc
/usr/share/benchdnn/inputs/conv/harness_conv_dw_float16_nxc
/usr/share/benchdnn/inputs/conv/harness_conv_f32
/usr/share/benchdnn/inputs/conv/harness_conv_f32_nxc
/usr/share/benchdnn/inputs/conv/harness_conv_fused_depthwise
/usr/share/benchdnn/inputs/conv/harness_conv_int8
/usr/share/benchdnn/inputs/conv/harness_conv_regression_general
/usr/share/benchdnn/inputs/conv/harness_conv_saturation_int8
/usr/share/benchdnn/inputs/conv/harness_conv_tags
/usr/share/benchdnn/inputs/conv/option_set_all_eltwise_postops
/usr/share/benchdnn/inputs/conv/option_set_combined_postops
/usr/share/benchdnn/inputs/conv/option_set_fwks_ext_gpu
/usr/share/benchdnn/inputs/conv/option_set_fwks_ext_gpu_reduced
/usr/share/benchdnn/inputs/conv/option_set_fwks_key_gpu
/usr/share/benchdnn/inputs/conv/perf_conv_bdw_1sock
/usr/share/benchdnn/inputs/conv/perf_conv_clx_1sock
/usr/share/benchdnn/inputs/conv/perf_conv_gen9
/usr/share/benchdnn/inputs/conv/perf_conv_skx_1sock
/usr/share/benchdnn/inputs/conv/perf_conv_xe_hp
/usr/share/benchdnn/inputs/conv/perf_conv_xe_lp
/usr/share/benchdnn/inputs/conv/set_all_topologies
/usr/share/benchdnn/inputs/conv/set_conv_3d
/usr/share/benchdnn/inputs/conv/set_conv_all
/usr/share/benchdnn/inputs/conv/set_conv_dw
/usr/share/benchdnn/inputs/conv/set_dilated-conv
/usr/share/benchdnn/inputs/conv/set_dilated-conv_1st
/usr/share/benchdnn/inputs/conv/set_dilated-conv_3d
/usr/share/benchdnn/inputs/conv/set_fastrcnn
/usr/share/benchdnn/inputs/conv/set_gpu
/usr/share/benchdnn/inputs/conv/set_maskrcnn
/usr/share/benchdnn/inputs/conv/set_perf_cpu_all_mb
/usr/share/benchdnn/inputs/conv/set_perf_cpu_inference_only
/usr/share/benchdnn/inputs/conv/set_perf_cpu_large_mb
/usr/share/benchdnn/inputs/conv/set_perf_cpu_small_mb
/usr/share/benchdnn/inputs/conv/set_perf_gpu_all_mb
/usr/share/benchdnn/inputs/conv/set_perf_gpu_large_mb
/usr/share/benchdnn/inputs/conv/set_perf_gpu_small_mb
/usr/share/benchdnn/inputs/conv/set_topologies_inference_only
/usr/share/benchdnn/inputs/conv/shapes_1d
/usr/share/benchdnn/inputs/conv/shapes_1d_wavenet
/usr/share/benchdnn/inputs/conv/shapes_1x1
/usr/share/benchdnn/inputs/conv/shapes_3d
/usr/share/benchdnn/inputs/conv/shapes_3d_1st_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_3d_1x1_strided_no-padding
/usr/share/benchdnn/inputs/conv/shapes_3d_1x1_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_3d_1x1_unit-stride_no-padding
/usr/share/benchdnn/inputs/conv/shapes_3d_1x1_unit-stride_padding
/usr/share/benchdnn/inputs/conv/shapes_3d_2d_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_3d_gpu
/usr/share/benchdnn/inputs/conv/shapes_3d_i3d
/usr/share/benchdnn/inputs/conv/shapes_3d_resnext101
/usr/share/benchdnn/inputs/conv/shapes_3d_strided_no-padding
/usr/share/benchdnn/inputs/conv/shapes_3d_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_3d_unet
/usr/share/benchdnn/inputs/conv/shapes_3d_unit-stride_no-padding
/usr/share/benchdnn/inputs/conv/shapes_3d_unit-stride_padding
/usr/share/benchdnn/inputs/conv/shapes_a3c
/usr/share/benchdnn/inputs/conv/shapes_alexnet
/usr/share/benchdnn/inputs/conv/shapes_auto
/usr/share/benchdnn/inputs/conv/shapes_basic
/usr/share/benchdnn/inputs/conv/shapes_basic_gpu
/usr/share/benchdnn/inputs/conv/shapes_cosmictagger
/usr/share/benchdnn/inputs/conv/shapes_deepbench_inference_device
/usr/share/benchdnn/inputs/conv/shapes_deepbench_inference_server
/usr/share/benchdnn/inputs/conv/shapes_deepbench_training
/usr/share/benchdnn/inputs/conv/shapes_densnet
/usr/share/benchdnn/inputs/conv/shapes_dilated
/usr/share/benchdnn/inputs/conv/shapes_dilated_1d_1st_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_1d_strided_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_1d_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_1d_unit-stride_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_1d_unit-stride_padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_2d_1st_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_2d_strided_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_2d_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_2d_unit-stride_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_2d_unit-stride_padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_3d_strided_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_3d_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_3d_unit-stride_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_3d_unit-stride_padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_rfcn
/usr/share/benchdnn/inputs/conv/shapes_dw_1d_stride_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dw_1d_unit-stride_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dw_1d_unit-stride_padding
/usr/share/benchdnn/inputs/conv/shapes_dw_2d_1d_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_dw_2d_strided_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dw_2d_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_dw_2d_unit-stride_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dw_2d_unit-stride_padding
/usr/share/benchdnn/inputs/conv/shapes_dw_3d_strided_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dw_3d_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_dw_3d_unit-stride_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dw_3d_unit-stride_padding
/usr/share/benchdnn/inputs/conv/shapes_dw_minibatch_2d-spatial
/usr/share/benchdnn/inputs/conv/shapes_dw_minibatch_channel_2d-spatial
/usr/share/benchdnn/inputs/conv/shapes_efficientdet
/usr/share/benchdnn/inputs/conv/shapes_fastrcnn_p1
/usr/share/benchdnn/inputs/conv/shapes_fastrcnn_p2
/usr/share/benchdnn/inputs/conv/shapes_fastrcnn_p3
/usr/share/benchdnn/inputs/conv/shapes_ffn
/usr/share/benchdnn/inputs/conv/shapes_fused_large_src
/usr/share/benchdnn/inputs/conv/shapes_fused_mobilenet_stride_1
/usr/share/benchdnn/inputs/conv/shapes_fused_mobilenet_stride_2
/usr/share/benchdnn/inputs/conv/shapes_gemm
/usr/share/benchdnn/inputs/conv/shapes_googlenet_v1
/usr/share/benchdnn/inputs/conv/shapes_googlenet_v2
/usr/share/benchdnn/inputs/conv/shapes_googlenet_v3
/usr/share/benchdnn/inputs/conv/shapes_large_padding
/usr/share/benchdnn/inputs/conv/shapes_maskrcnn_p1
/usr/share/benchdnn/inputs/conv/shapes_maskrcnn_p2
/usr/share/benchdnn/inputs/conv/shapes_mobilenet
/usr/share/benchdnn/inputs/conv/shapes_mobilenet_dw
/usr/share/benchdnn/inputs/conv/shapes_movinet_dw
/usr/share/benchdnn/inputs/conv/shapes_pointnet
/usr/share/benchdnn/inputs/conv/shapes_regression_1x1
/usr/share/benchdnn/inputs/conv/shapes_regression_dw
/usr/share/benchdnn/inputs/conv/shapes_regression_gemm
/usr/share/benchdnn/inputs/conv/shapes_regression_padding
/usr/share/benchdnn/inputs/conv/shapes_regression_small_spatial
/usr/share/benchdnn/inputs/conv/shapes_resnet_50
/usr/share/benchdnn/inputs/conv/shapes_resnet_50_sparse
/usr/share/benchdnn/inputs/conv/shapes_resnet_50_v1_5
/usr/share/benchdnn/inputs/conv/shapes_resnext_101
/usr/share/benchdnn/inputs/conv/shapes_segnet
/usr/share/benchdnn/inputs/conv/shapes_src-transpose_padding
/usr/share/benchdnn/inputs/conv/shapes_ssd_300_voc0712
/usr/share/benchdnn/inputs/conv/shapes_ssd_mobilenet
/usr/share/benchdnn/inputs/conv/shapes_ssd_resnet34_inference
/usr/share/benchdnn/inputs/conv/shapes_ssd_resnet34_training
/usr/share/benchdnn/inputs/conv/shapes_tails
/usr/share/benchdnn/inputs/conv/shapes_tails_gpu
/usr/share/benchdnn/inputs/conv/shapes_unet
/usr/share/benchdnn/inputs/conv/shapes_vgg_11
/usr/share/benchdnn/inputs/conv/shapes_vgg_19
/usr/share/benchdnn/inputs/conv/shapes_x3d_dw
/usr/share/benchdnn/inputs/conv/shapes_xception
/usr/share/benchdnn/inputs/conv/shapes_yolov2
/usr/share/benchdnn/inputs/conv/test_conv_3d
/usr/share/benchdnn/inputs/conv/test_conv_3d_f32_nxc
/usr/share/benchdnn/inputs/conv/test_conv_all
/usr/share/benchdnn/inputs/conv/test_conv_all_topologies
/usr/share/benchdnn/inputs/conv/test_conv_all_topologies_f32_nxc
/usr/share/benchdnn/inputs/conv/test_conv_attrs
/usr/share/benchdnn/inputs/conv/test_conv_attrs_f32_nxc
/usr/share/benchdnn/inputs/conv/test_conv_bfloat16
/usr/share/benchdnn/inputs/conv/test_conv_bfloat16_nxc
/usr/share/benchdnn/inputs/conv/test_conv_bfloat16_ymm
/usr/share/benchdnn/inputs/conv/test_conv_ci
/usr/share/benchdnn/inputs/conv/test_conv_depthwise
/usr/share/benchdnn/inputs/conv/test_conv_dilated
/usr/share/benchdnn/inputs/conv/test_conv_dilated_f32_nxc
/usr/share/benchdnn/inputs/conv/test_conv_dt
/usr/share/benchdnn/inputs/conv/test_conv_dt_nxc
/usr/share/benchdnn/inputs/conv/test_conv_float16_nxc
/usr/share/benchdnn/inputs/conv/test_conv_function
/usr/share/benchdnn/inputs/conv/test_conv_gemm_bfloat16
/usr/share/benchdnn/inputs/conv/test_conv_gemm_bfloat16_nxc
/usr/share/benchdnn/inputs/conv/test_conv_gemm_dt
/usr/share/benchdnn/inputs/conv/test_conv_gemm_dt_nxc
/usr/share/benchdnn/inputs/conv/test_conv_gemm_int8
/usr/share/benchdnn/inputs/conv/test_conv_gpu
/usr/share/benchdnn/inputs/conv/test_conv_gpu_ci
/usr/share/benchdnn/inputs/conv/test_conv_int8
/usr/share/benchdnn/inputs/conv/test_conv_regression
/usr/share/benchdnn/inputs/conv/test_conv_regression_gpu
/usr/share/benchdnn/inputs/conv/test_conv_smoke
/usr/share/benchdnn/inputs/conv/test_conv_wino_f32
/usr/share/benchdnn/inputs/conv/test_conv_wino_gpu
/usr/share/benchdnn/inputs/deconv
/usr/share/benchdnn/inputs/deconv/harness_deconv_attrs_int8
/usr/share/benchdnn/inputs/deconv/harness_deconv_attrs_int8_asymmetric
/usr/share/benchdnn/inputs/deconv/harness_deconv_regression_general_f32
/usr/share/benchdnn/inputs/deconv/harness_deconv_regression_general_int8
/usr/share/benchdnn/inputs/deconv/option_set_fwks_ext_gpu
/usr/share/benchdnn/inputs/deconv/option_set_fwks_key_gpu
/usr/share/benchdnn/inputs/deconv/set_all
/usr/share/benchdnn/inputs/deconv/shapes_1d
/usr/share/benchdnn/inputs/deconv/shapes_1x1
/usr/share/benchdnn/inputs/deconv/shapes_2d
/usr/share/benchdnn/inputs/deconv/shapes_3d
/usr/share/benchdnn/inputs/deconv/shapes_ci
/usr/share/benchdnn/inputs/deconv/shapes_dilated
/usr/share/benchdnn/inputs/deconv/test_deconv_all
/usr/share/benchdnn/inputs/deconv/test_deconv_all_f32_nxc
/usr/share/benchdnn/inputs/deconv/test_deconv_bfloat16
/usr/share/benchdnn/inputs/deconv/test_deconv_bfloat16_nxc
/usr/share/benchdnn/inputs/deconv/test_deconv_bfloat16_ymm
/usr/share/benchdnn/inputs/deconv/test_deconv_ci
/usr/share/benchdnn/inputs/deconv/test_deconv_float16_nxc
/usr/share/benchdnn/inputs/deconv/test_deconv_gpu
/usr/share/benchdnn/inputs/deconv/test_deconv_int8
/usr/share/benchdnn/inputs/deconv/test_deconv_smoke
/usr/share/benchdnn/inputs/eltwise
/usr/share/benchdnn/inputs/eltwise/harness_eltwise_large_buffer
/usr/share/benchdnn/inputs/eltwise/harness_eltwise_regression
/usr/share/benchdnn/inputs/eltwise/harness_eltwise_saturation
/usr/share/benchdnn/inputs/eltwise/option_set_all_algs
/usr/share/benchdnn/inputs/eltwise/option_set_all_algs_ci
/usr/share/benchdnn/inputs/eltwise/option_set_all_algs_int8
/usr/share/benchdnn/inputs/eltwise/option_set_all_algs_int8_ci
/usr/share/benchdnn/inputs/eltwise/option_set_fwks_ext_gpu
/usr/share/benchdnn/inputs/eltwise/option_set_fwks_key_gpu
/usr/share/benchdnn/inputs/eltwise/shapes_ci
/usr/share/benchdnn/inputs/eltwise/shapes_eltwise
/usr/share/benchdnn/inputs/eltwise/shapes_large_buffer
/usr/share/benchdnn/inputs/eltwise/test_eltwise_all
/usr/share/benchdnn/inputs/eltwise/test_eltwise_bfloat16
/usr/share/benchdnn/inputs/eltwise/test_eltwise_ci
/usr/share/benchdnn/inputs/eltwise/test_eltwise_float16
/usr/share/benchdnn/inputs/eltwise/test_eltwise_gpu
/usr/share/benchdnn/inputs/eltwise/test_eltwise_smoke
/usr/share/benchdnn/inputs/gnorm
/usr/share/benchdnn/inputs/gnorm/shapes_all
/usr/share/benchdnn/inputs/gnorm/shapes_ci
/usr/share/benchdnn/inputs/gnorm/test_gnorm_all
/usr/share/benchdnn/inputs/gnorm/test_gnorm_ci
/usr/share/benchdnn/inputs/graph
/usr/share/benchdnn/inputs/graph/op
/usr/share/benchdnn/inputs/graph/op/bf16
/usr/share/benchdnn/inputs/graph/op/bf16/abs.json
/usr/share/benchdnn/inputs/graph/op/bf16/abs_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/add.json
/usr/share/benchdnn/inputs/graph/op/bf16/avgpool.json
/usr/share/benchdnn/inputs/graph/op/bf16/avgpool_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/biasadd.json
/usr/share/benchdnn/inputs/graph/op/bf16/biasadd_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/bnorm.json
/usr/share/benchdnn/inputs/graph/op/bf16/bnorm_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/bnorm_fwd_d.json
/usr/share/benchdnn/inputs/graph/op/bf16/clamp.json
/usr/share/benchdnn/inputs/graph/op/bf16/clamp_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/concat.json
/usr/share/benchdnn/inputs/graph/op/bf16/concat_2.json
/usr/share/benchdnn/inputs/graph/op/bf16/concat_3.json
/usr/share/benchdnn/inputs/graph/op/bf16/conv_2d.json
/usr/share/benchdnn/inputs/graph/op/bf16/conv_bwd_d_2d.json
/usr/share/benchdnn/inputs/graph/op/bf16/conv_bwd_w_2d.json
/usr/share/benchdnn/inputs/graph/op/bf16/deconv.json
/usr/share/benchdnn/inputs/graph/op/bf16/deconv_bwd_d.json
/usr/share/benchdnn/inputs/graph/op/bf16/deconv_bwd_w.json
/usr/share/benchdnn/inputs/graph/op/bf16/div.json
/usr/share/benchdnn/inputs/graph/op/bf16/elu.json
/usr/share/benchdnn/inputs/graph/op/bf16/elu_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/exp.json
/usr/share/benchdnn/inputs/graph/op/bf16/gelu.json
/usr/share/benchdnn/inputs/graph/op/bf16/gelu_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/hardsigmoid.json
/usr/share/benchdnn/inputs/graph/op/bf16/hardsigmoid_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/hardswish.json
/usr/share/benchdnn/inputs/graph/op/bf16/hardswish_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/interpolate.json
/usr/share/benchdnn/inputs/graph/op/bf16/interpolate_2.json
/usr/share/benchdnn/inputs/graph/op/bf16/interpolate_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/interpolate_bwd_1d.json
/usr/share/benchdnn/inputs/graph/op/bf16/interpolate_bwd_2d.json
/usr/share/benchdnn/inputs/graph/op/bf16/leakyrelu.json
/usr/share/benchdnn/inputs/graph/op/bf16/lnorm.json
/usr/share/benchdnn/inputs/graph/op/bf16/lnorm_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/lnorm_ks.json
/usr/share/benchdnn/inputs/graph/op/bf16/log.json
/usr/share/benchdnn/inputs/graph/op/bf16/logsoftmax.json
/usr/share/benchdnn/inputs/graph/op/bf16/logsoftmax_3d.json
/usr/share/benchdnn/inputs/graph/op/bf16/logsoftmax_3d_2.json
/usr/share/benchdnn/inputs/graph/op/bf16/logsoftmax_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/matmul.json
/usr/share/benchdnn/inputs/graph/op/bf16/max.json
/usr/share/benchdnn/inputs/graph/op/bf16/maxpool.json
/usr/share/benchdnn/inputs/graph/op/bf16/maxpool_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/min.json
/usr/share/benchdnn/inputs/graph/op/bf16/mish.json
/usr/share/benchdnn/inputs/graph/op/bf16/mish_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/mul.json
/usr/share/benchdnn/inputs/graph/op/bf16/prelu.json
/usr/share/benchdnn/inputs/graph/op/bf16/prelu_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/reciprocal.json
/usr/share/benchdnn/inputs/graph/op/bf16/reducel1.json
/usr/share/benchdnn/inputs/graph/op/bf16/reducel2.json
/usr/share/benchdnn/inputs/graph/op/bf16/reducemax.json
/usr/share/benchdnn/inputs/graph/op/bf16/reducemean.json
/usr/share/benchdnn/inputs/graph/op/bf16/reducemin.json
/usr/share/benchdnn/inputs/graph/op/bf16/reduceprod.json
/usr/share/benchdnn/inputs/graph/op/bf16/reducesum.json
/usr/share/benchdnn/inputs/graph/op/bf16/relu.json
/usr/share/benchdnn/inputs/graph/op/bf16/relu_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/reorder.json
/usr/share/benchdnn/inputs/graph/op/bf16/sigmoid.json
/usr/share/benchdnn/inputs/graph/op/bf16/sigmoid_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/softmax.json
/usr/share/benchdnn/inputs/graph/op/bf16/softmax_3d.json
/usr/share/benchdnn/inputs/graph/op/bf16/softmax_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/softmax_bwd_d_3d.json
/usr/share/benchdnn/inputs/graph/op/bf16/sqrt.json
/usr/share/benchdnn/inputs/graph/op/bf16/sqrt_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/square.json
/usr/share/benchdnn/inputs/graph/op/bf16/sub.json
/usr/share/benchdnn/inputs/graph/op/bf16/tanh.json
/usr/share/benchdnn/inputs/graph/op/bf16/tanh_bwd.json
/usr/share/benchdnn/inputs/graph/op/bf16/typecast.json
/usr/share/benchdnn/inputs/graph/op/f16
/usr/share/benchdnn/inputs/graph/op/f16/abs.json
/usr/share/benchdnn/inputs/graph/op/f16/abs_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/add.json
/usr/share/benchdnn/inputs/graph/op/f16/avgpool.json
/usr/share/benchdnn/inputs/graph/op/f16/avgpool_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/biasadd.json
/usr/share/benchdnn/inputs/graph/op/f16/biasadd_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/bnorm.json
/usr/share/benchdnn/inputs/graph/op/f16/bnorm_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/bnorm_fwd_d.json
/usr/share/benchdnn/inputs/graph/op/f16/clamp.json
/usr/share/benchdnn/inputs/graph/op/f16/clamp_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/concat.json
/usr/share/benchdnn/inputs/graph/op/f16/conv_2d.json
/usr/share/benchdnn/inputs/graph/op/f16/conv_bwd_d_2d.json
/usr/share/benchdnn/inputs/graph/op/f16/conv_bwd_w_2d.json
/usr/share/benchdnn/inputs/graph/op/f16/deconv.json
/usr/share/benchdnn/inputs/graph/op/f16/deconv_bwd_d.json
/usr/share/benchdnn/inputs/graph/op/f16/deconv_bwd_w.json
/usr/share/benchdnn/inputs/graph/op/f16/div.json
/usr/share/benchdnn/inputs/graph/op/f16/elu.json
/usr/share/benchdnn/inputs/graph/op/f16/elu_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/exp.json
/usr/share/benchdnn/inputs/graph/op/f16/gelu.json
/usr/share/benchdnn/inputs/graph/op/f16/gelu_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/hardsigmoid.json
/usr/share/benchdnn/inputs/graph/op/f16/hardsigmoid_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/hardswish.json
/usr/share/benchdnn/inputs/graph/op/f16/hardswish_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/interpolate.json
/usr/share/benchdnn/inputs/graph/op/f16/interpolate_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/leakyrelu.json
/usr/share/benchdnn/inputs/graph/op/f16/lnorm.json
/usr/share/benchdnn/inputs/graph/op/f16/lnorm_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/lnorm_ks.json
/usr/share/benchdnn/inputs/graph/op/f16/log.json
/usr/share/benchdnn/inputs/graph/op/f16/logsoftmax.json
/usr/share/benchdnn/inputs/graph/op/f16/logsoftmax_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/matmul.json
/usr/share/benchdnn/inputs/graph/op/f16/max.json
/usr/share/benchdnn/inputs/graph/op/f16/maxpool.json
/usr/share/benchdnn/inputs/graph/op/f16/maxpool_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/min.json
/usr/share/benchdnn/inputs/graph/op/f16/mish.json
/usr/share/benchdnn/inputs/graph/op/f16/mish_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/mul.json
/usr/share/benchdnn/inputs/graph/op/f16/prelu.json
/usr/share/benchdnn/inputs/graph/op/f16/prelu_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/reciprocal.json
/usr/share/benchdnn/inputs/graph/op/f16/reducel1.json
/usr/share/benchdnn/inputs/graph/op/f16/reducel2.json
/usr/share/benchdnn/inputs/graph/op/f16/reducemax.json
/usr/share/benchdnn/inputs/graph/op/f16/reducemean.json
/usr/share/benchdnn/inputs/graph/op/f16/reducemin.json
/usr/share/benchdnn/inputs/graph/op/f16/reduceprod.json
/usr/share/benchdnn/inputs/graph/op/f16/reducesum.json
/usr/share/benchdnn/inputs/graph/op/f16/relu.json
/usr/share/benchdnn/inputs/graph/op/f16/relu_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/reorder.json
/usr/share/benchdnn/inputs/graph/op/f16/sigmoid.json
/usr/share/benchdnn/inputs/graph/op/f16/sigmoid_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/softmax.json
/usr/share/benchdnn/inputs/graph/op/f16/softmax_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/sqrt.json
/usr/share/benchdnn/inputs/graph/op/f16/sqrt_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/square.json
/usr/share/benchdnn/inputs/graph/op/f16/sub.json
/usr/share/benchdnn/inputs/graph/op/f16/tanh.json
/usr/share/benchdnn/inputs/graph/op/f16/tanh_bwd.json
/usr/share/benchdnn/inputs/graph/op/f16/typecast.json
/usr/share/benchdnn/inputs/graph/op/f32
/usr/share/benchdnn/inputs/graph/op/f32/abs.json
/usr/share/benchdnn/inputs/graph/op/f32/abs_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/add.json
/usr/share/benchdnn/inputs/graph/op/f32/add_0d.json
/usr/share/benchdnn/inputs/graph/op/f32/avgpool.json
/usr/share/benchdnn/inputs/graph/op/f32/avgpool_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/avgpool_bwd_2.json
/usr/share/benchdnn/inputs/graph/op/f32/biasadd.json
/usr/share/benchdnn/inputs/graph/op/f32/biasadd_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/bnorm.json
/usr/share/benchdnn/inputs/graph/op/f32/bnorm_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/bnorm_fwd_d.json
/usr/share/benchdnn/inputs/graph/op/f32/clamp.json
/usr/share/benchdnn/inputs/graph/op/f32/clamp_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/concat.json
/usr/share/benchdnn/inputs/graph/op/f32/concat_2.json
/usr/share/benchdnn/inputs/graph/op/f32/conv_2d.json
/usr/share/benchdnn/inputs/graph/op/f32/conv_3d.json
/usr/share/benchdnn/inputs/graph/op/f32/conv_bwd_d_2d.json
/usr/share/benchdnn/inputs/graph/op/f32/conv_bwd_d_3d.json
/usr/share/benchdnn/inputs/graph/op/f32/conv_bwd_w_1d.json
/usr/share/benchdnn/inputs/graph/op/f32/conv_bwd_w_2d.json
/usr/share/benchdnn/inputs/graph/op/f32/deconv.json
/usr/share/benchdnn/inputs/graph/op/f32/deconv_bwd_d.json
/usr/share/benchdnn/inputs/graph/op/f32/deconv_bwd_d_3d.json
/usr/share/benchdnn/inputs/graph/op/f32/deconv_bwd_w.json
/usr/share/benchdnn/inputs/graph/op/f32/dequantize_s8.json
/usr/share/benchdnn/inputs/graph/op/f32/dequantize_u8.json
/usr/share/benchdnn/inputs/graph/op/f32/div.json
/usr/share/benchdnn/inputs/graph/op/f32/dynamicdq.json
/usr/share/benchdnn/inputs/graph/op/f32/dynamicq.json
/usr/share/benchdnn/inputs/graph/op/f32/elu.json
/usr/share/benchdnn/inputs/graph/op/f32/elu_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/exp.json
/usr/share/benchdnn/inputs/graph/op/f32/gelu.json
/usr/share/benchdnn/inputs/graph/op/f32/gelu_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/hardsigmoid.json
/usr/share/benchdnn/inputs/graph/op/f32/hardsigmoid_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/hardswish.json
/usr/share/benchdnn/inputs/graph/op/f32/hardswish_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/interpolate.json
/usr/share/benchdnn/inputs/graph/op/f32/interpolate_2d.json
/usr/share/benchdnn/inputs/graph/op/f32/interpolate_3d.json
/usr/share/benchdnn/inputs/graph/op/f32/interpolate_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/interpolate_bwd_1d.json
/usr/share/benchdnn/inputs/graph/op/f32/leakyrelu.json
/usr/share/benchdnn/inputs/graph/op/f32/lnorm.json
/usr/share/benchdnn/inputs/graph/op/f32/lnorm_3d.json
/usr/share/benchdnn/inputs/graph/op/f32/lnorm_3d_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/lnorm_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/lnorm_ks.json
/usr/share/benchdnn/inputs/graph/op/f32/log.json
/usr/share/benchdnn/inputs/graph/op/f32/logsoftmax.json
/usr/share/benchdnn/inputs/graph/op/f32/logsoftmax_3d.json
/usr/share/benchdnn/inputs/graph/op/f32/logsoftmax_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/logsoftmax_bwd_d_3d.json
/usr/share/benchdnn/inputs/graph/op/f32/matmul.json
/usr/share/benchdnn/inputs/graph/op/f32/matmul_2d_4d.json
/usr/share/benchdnn/inputs/graph/op/f32/max.json
/usr/share/benchdnn/inputs/graph/op/f32/maxpool.json
/usr/share/benchdnn/inputs/graph/op/f32/maxpool_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/maxpool_bwd_2.json
/usr/share/benchdnn/inputs/graph/op/f32/min.json
/usr/share/benchdnn/inputs/graph/op/f32/mish.json
/usr/share/benchdnn/inputs/graph/op/f32/mish_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/mul.json
/usr/share/benchdnn/inputs/graph/op/f32/prelu.json
/usr/share/benchdnn/inputs/graph/op/f32/prelu_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/prelu_bwd_dw_1d.json
/usr/share/benchdnn/inputs/graph/op/f32/prelu_bwd_dw_2d.json
/usr/share/benchdnn/inputs/graph/op/f32/prelu_bwd_dw_3d.json
/usr/share/benchdnn/inputs/graph/op/f32/prelu_bwd_dw_5d.json
/usr/share/benchdnn/inputs/graph/op/f32/quantize.json
/usr/share/benchdnn/inputs/graph/op/f32/reciprocal.json
/usr/share/benchdnn/inputs/graph/op/f32/reducel1.json
/usr/share/benchdnn/inputs/graph/op/f32/reducel2.json
/usr/share/benchdnn/inputs/graph/op/f32/reducemax.json
/usr/share/benchdnn/inputs/graph/op/f32/reducemean.json
/usr/share/benchdnn/inputs/graph/op/f32/reducemin.json
/usr/share/benchdnn/inputs/graph/op/f32/reduceprod.json
/usr/share/benchdnn/inputs/graph/op/f32/reducesum.json
/usr/share/benchdnn/inputs/graph/op/f32/relu.json
/usr/share/benchdnn/inputs/graph/op/f32/relu_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/reorder.json
/usr/share/benchdnn/inputs/graph/op/f32/reorder_2.json
/usr/share/benchdnn/inputs/graph/op/f32/reorder_3.json
/usr/share/benchdnn/inputs/graph/op/f32/round.json
/usr/share/benchdnn/inputs/graph/op/f32/sigmoid.json
/usr/share/benchdnn/inputs/graph/op/f32/sigmoid_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/softmax.json
/usr/share/benchdnn/inputs/graph/op/f32/softmax_3d.json
/usr/share/benchdnn/inputs/graph/op/f32/softmax_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/softplus_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/sqrt.json
/usr/share/benchdnn/inputs/graph/op/f32/sqrt_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/square.json
/usr/share/benchdnn/inputs/graph/op/f32/sub.json
/usr/share/benchdnn/inputs/graph/op/f32/tanh.json
/usr/share/benchdnn/inputs/graph/op/f32/tanh_bwd.json
/usr/share/benchdnn/inputs/graph/op/f32/tanh_bwd_2.json
/usr/share/benchdnn/inputs/graph/op/f32/typecast.json
/usr/share/benchdnn/inputs/graph/op/harness_bf16_all
/usr/share/benchdnn/inputs/graph/op/harness_bf16_ci
/usr/share/benchdnn/inputs/graph/op/harness_f16_all
/usr/share/benchdnn/inputs/graph/op/harness_f16_ci
/usr/share/benchdnn/inputs/graph/op/harness_f32_all
/usr/share/benchdnn/inputs/graph/op/harness_f32_ci
/usr/share/benchdnn/inputs/graph/pattern
/usr/share/benchdnn/inputs/graph/pattern/bf16
/usr/share/benchdnn/inputs/graph/pattern/bf16/binary_post_ops_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/bn_bwd_relu_bwd_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/bn_relu_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/conv_bias_post_ops_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/conv_depthwise_fusion_cpu.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/conv_post_ops_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/convtranspose_post_ops_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/interpolate_post_ops_chain_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/interpolate_post_ops_chain_fusion_2.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/matmul_bias_post_ops_chain_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/matmul_bias_post_ops_clip_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/matmul_bias_post_ops_elu_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/matmul_post_ops_chain_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/matmul_post_ops_clip_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/matmul_post_ops_relu_add_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/matmul_post_ops_sum_logistic_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/reciprocal_multiply_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/reduction_post_ops_l1_chain_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/reduction_post_ops_mean_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/reduction_post_ops_min_chain_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/reduction_post_ops_sum_chain_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/bf16/shuffle_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f16
/usr/share/benchdnn/inputs/graph/pattern/f16/binary_post_ops_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f16/bn_bwd_relu_bwd_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f16/bn_relu_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f16/conv_bias_post_ops_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f16/conv_depthwise_fusion_cpu.json
/usr/share/benchdnn/inputs/graph/pattern/f16/conv_post_ops_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f16/convtranspose_post_ops_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f16/matmul_bias_post_ops_chain_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f16/matmul_post_ops_chain_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f16/reciprocal_multiply_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32
/usr/share/benchdnn/inputs/graph/pattern/f32/avgpool_3d_chain_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/binary_2d_post_ops_relu_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/binary_2d_post_ops_sum_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/binary_3d_post_ops_add_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/binary_4d_post_ops_relu_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/binary_4d_post_ops_sum_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/binary_post_ops_chain_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/binary_post_ops_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/binary_post_ops_logistic_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/bn_bwd_relu_bwd_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/bn_relu_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/conv_add_sigmoid_multiply_relu_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/conv_bias_add_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/conv_bias_post_ops_chain_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/conv_bias_post_ops_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/conv_bias_sum_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/conv_bias_sum_fusion_2.json
/usr/share/benchdnn/inputs/graph/pattern/f32/conv_bias_swish_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/conv_depthwise_fusion_cpu.json
/usr/share/benchdnn/inputs/graph/pattern/f32/conv_post_ops_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/convtranspose_post_ops_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/interpolate_post_ops_chain_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/interpolate_post_ops_chain_fusion_2.json
/usr/share/benchdnn/inputs/graph/pattern/f32/matmul_bias_post_ops_chain_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/matmul_post_ops_add_add_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/matmul_post_ops_chain_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/matmul_post_ops_sum_relu_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/matmul_post_ops_swish_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/maxpool_chain_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/maxpool_sum_relu_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/reciprocal_multiply_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/reduction_post_ops_l2_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/reduction_post_ops_max_chain_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/reduction_post_ops_prod_chain_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/shuffle_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/softmax_post_ops_binary_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/softmax_post_ops_unary_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/unary_post_ops_elu_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/unary_post_ops_gelu_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/unary_post_ops_hardswish_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/unary_post_ops_hardswish_fusion_2.json
/usr/share/benchdnn/inputs/graph/pattern/f32/unary_post_ops_log_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/unary_post_ops_round_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/unary_post_ops_sqrt_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/unary_post_ops_square_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/f32/unary_post_ops_tanh_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/harness_bf16_all
/usr/share/benchdnn/inputs/graph/pattern/harness_bf16_ci
/usr/share/benchdnn/inputs/graph/pattern/harness_f16_all
/usr/share/benchdnn/inputs/graph/pattern/harness_f16_ci
/usr/share/benchdnn/inputs/graph/pattern/harness_f32_all
/usr/share/benchdnn/inputs/graph/pattern/harness_f32_ci
/usr/share/benchdnn/inputs/graph/pattern/harness_int8_all
/usr/share/benchdnn/inputs/graph/pattern/harness_int8_ci
/usr/share/benchdnn/inputs/graph/pattern/int8
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_avgpool_reshape_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_avgpool_transpose_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_bf16_conv_add_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_bf16_conv_add_relu_mul.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_bf16_conv_relu_mul.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_bf16_matmul.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_bf16_matmul_add_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_bf16_matmul_add_mul_relu.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_bf16_matmul_mul_add_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_bf16_matmul_mul_add_fusion_2.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_bf16_matmul_mul_w_smooth_quant_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_bf16_matmul_post_ops_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_bf16_matmul_relu_w_smooth_quant_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_bf16_matmul_sum_add_mul_relu.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_bnorm_relu_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_concat_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_concat_fusion_2.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_concat_fusion_3.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_conv_2d_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_conv_2d_fusion_2.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_conv_2d_fwd_i_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_conv_add_add_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_conv_add_mul_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_conv_bias_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_conv_bias_mish_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_conv_bias_relu_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_conv_bias_relu_fusion_2.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_conv_bias_relu_fusion_3.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_conv_post_ops_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_conv_post_ops_int8_add_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_conv_relu_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_convtranspose_post_ops_add_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_convtranspose_post_ops_chain_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_convtranspose_post_ops_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_convtranspose_post_ops_square_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_convtranspose_post_ops_sum_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_convtranspose_post_ops_sum_fusion_2.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_depthwise_conv_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_f32_matmul_mul_add_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_f32_matmul_mul_add_fusion_2.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_matmul_add_mul_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_matmul_add_mul_relu.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_matmul_bia_relu_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_matmul_bias_sum_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_matmul_logistic_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_matmul_mul_add_mul_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_matmul_post_ops_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_matmul_sum_add_mul_relu.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_maxpool_add_mul_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_reorder_fusion.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_reorder_fusion_2.json
/usr/share/benchdnn/inputs/graph/pattern/int8/int8_reorder_fusion_3.json
/usr/share/benchdnn/inputs/graph/test_graph_all
/usr/share/benchdnn/inputs/graph/test_graph_bf16
/usr/share/benchdnn/inputs/graph/test_graph_bf16_gpu
/usr/share/benchdnn/inputs/graph/test_graph_ci
/usr/share/benchdnn/inputs/graph/test_graph_f16
/usr/share/benchdnn/inputs/graph/test_graph_f16_gpu
/usr/share/benchdnn/inputs/graph/test_graph_f32
/usr/share/benchdnn/inputs/graph/test_graph_f32_gpu
/usr/share/benchdnn/inputs/graph/test_graph_int8
/usr/share/benchdnn/inputs/graph/test_graph_int8_gpu
/usr/share/benchdnn/inputs/ip
/usr/share/benchdnn/inputs/ip/harness_ip_gpt-j_2016-32_inf_lb_bfloat16
/usr/share/benchdnn/inputs/ip/harness_ip_gpt-j_2016-32_inf_lb_f32
/usr/share/benchdnn/inputs/ip/harness_ip_gpt-j_2016-32_inf_lb_float16
/usr/share/benchdnn/inputs/ip/harness_ip_gpt-j_2016-32_inf_sb_bfloat16
/usr/share/benchdnn/inputs/ip/harness_ip_gpt-j_2016-32_inf_sb_f32
/usr/share/benchdnn/inputs/ip/harness_ip_gpt-j_2016-32_inf_sb_float16
/usr/share/benchdnn/inputs/ip/harness_ip_gpt-j_32-32_inf_lb_bfloat16
/usr/share/benchdnn/inputs/ip/harness_ip_gpt-j_32-32_inf_lb_f32
/usr/share/benchdnn/inputs/ip/harness_ip_gpt-j_32-32_inf_lb_float16
/usr/share/benchdnn/inputs/ip/harness_ip_gpt-j_32-32_inf_sb_bfloat16
/usr/share/benchdnn/inputs/ip/harness_ip_gpt-j_32-32_inf_sb_f32
/usr/share/benchdnn/inputs/ip/harness_ip_gpt-j_32-32_inf_sb_float16
/usr/share/benchdnn/inputs/ip/harness_ip_regression
/usr/share/benchdnn/inputs/ip/harness_ip_sanitizers
/usr/share/benchdnn/inputs/ip/harness_ip_saturation
/usr/share/benchdnn/inputs/ip/harness_ip_tag
/usr/share/benchdnn/inputs/ip/harness_ip_tag_gpu
/usr/share/benchdnn/inputs/ip/option_set_fwks_ext_gpu
/usr/share/benchdnn/inputs/ip/option_set_fwks_key_gpu
/usr/share/benchdnn/inputs/ip/option_set_fwks_key_perf_gpu
/usr/share/benchdnn/inputs/ip/perf_ip_cpu
/usr/share/benchdnn/inputs/ip/perf_ip_gen9
/usr/share/benchdnn/inputs/ip/perf_ip_inference_lb
/usr/share/benchdnn/inputs/ip/perf_ip_inference_sb
/usr/share/benchdnn/inputs/ip/perf_ip_knx
/usr/share/benchdnn/inputs/ip/perf_ip_training
/usr/share/benchdnn/inputs/ip/perf_ip_xe_hp
/usr/share/benchdnn/inputs/ip/perf_ip_xe_lp
/usr/share/benchdnn/inputs/ip/set_all
/usr/share/benchdnn/inputs/ip/set_gpu
/usr/share/benchdnn/inputs/ip/set_topologies
/usr/share/benchdnn/inputs/ip/shapes_0d
/usr/share/benchdnn/inputs/ip/shapes_0d_gpu
/usr/share/benchdnn/inputs/ip/shapes_1d
/usr/share/benchdnn/inputs/ip/shapes_3d
/usr/share/benchdnn/inputs/ip/shapes_alexnet
/usr/share/benchdnn/inputs/ip/shapes_bert
/usr/share/benchdnn/inputs/ip/shapes_bert_large
/usr/share/benchdnn/inputs/ip/shapes_ci
/usr/share/benchdnn/inputs/ip/shapes_dien_sb
/usr/share/benchdnn/inputs/ip/shapes_dlrm
/usr/share/benchdnn/inputs/ip/shapes_gnmt
/usr/share/benchdnn/inputs/ip/shapes_googlenet_v1
/usr/share/benchdnn/inputs/ip/shapes_googlenet_v3
/usr/share/benchdnn/inputs/ip/shapes_maskrcnn
/usr/share/benchdnn/inputs/ip/shapes_ncf
/usr/share/benchdnn/inputs/ip/shapes_regression
/usr/share/benchdnn/inputs/ip/shapes_resnet_50
/usr/share/benchdnn/inputs/ip/shapes_resnet_50_sparse
/usr/share/benchdnn/inputs/ip/shapes_rnn_t
/usr/share/benchdnn/inputs/ip/shapes_transformer_lt
/usr/share/benchdnn/inputs/ip/shapes_vgg16
/usr/share/benchdnn/inputs/ip/shapes_wd
/usr/share/benchdnn/inputs/ip/test_ip_acl
/usr/share/benchdnn/inputs/ip/test_ip_all
/usr/share/benchdnn/inputs/ip/test_ip_bf32_bfloat16
/usr/share/benchdnn/inputs/ip/test_ip_bfloat16
/usr/share/benchdnn/inputs/ip/test_ip_bfloat16_ymm
/usr/share/benchdnn/inputs/ip/test_ip_ci
/usr/share/benchdnn/inputs/ip/test_ip_float16
/usr/share/benchdnn/inputs/ip/test_ip_gpu
/usr/share/benchdnn/inputs/ip/test_ip_int8
/usr/share/benchdnn/inputs/ip/test_ip_smoke
/usr/share/benchdnn/inputs/lnorm
/usr/share/benchdnn/inputs/lnorm/option_set_all
/usr/share/benchdnn/inputs/lnorm/option_set_fwks_ext_gpu
/usr/share/benchdnn/inputs/lnorm/option_set_fwks_key_gpu
/usr/share/benchdnn/inputs/lnorm/shapes_ci
/usr/share/benchdnn/inputs/lnorm/test_lnorm_all
/usr/share/benchdnn/inputs/lnorm/test_lnorm_bfloat16
/usr/share/benchdnn/inputs/lnorm/test_lnorm_ci
/usr/share/benchdnn/inputs/lnorm/test_lnorm_float16
/usr/share/benchdnn/inputs/lnorm/test_lnorm_gpu
/usr/share/benchdnn/inputs/lnorm/test_lnorm_int8
/usr/share/benchdnn/inputs/lnorm/test_lnorm_smoke
/usr/share/benchdnn/inputs/lrn
/usr/share/benchdnn/inputs/lrn/set_all
/usr/share/benchdnn/inputs/lrn/shapes_0d
/usr/share/benchdnn/inputs/lrn/shapes_2d
/usr/share/benchdnn/inputs/lrn/shapes_3d
/usr/share/benchdnn/inputs/lrn/shapes_ci
/usr/share/benchdnn/inputs/lrn/shapes_topologies
/usr/share/benchdnn/inputs/lrn/test_lrn_all
/usr/share/benchdnn/inputs/lrn/test_lrn_bfloat16
/usr/share/benchdnn/inputs/lrn/test_lrn_ci
/usr/share/benchdnn/inputs/lrn/test_lrn_float16
/usr/share/benchdnn/inputs/lrn/test_lrn_gpu
/usr/share/benchdnn/inputs/lrn/test_lrn_smoke
/usr/share/benchdnn/inputs/matmul
/usr/share/benchdnn/inputs/matmul/harness_matmul_bert_inf_lb_bfloat16
/usr/share/benchdnn/inputs/matmul/harness_matmul_bert_inf_lb_int8
/usr/share/benchdnn/inputs/matmul/harness_matmul_bert_inf_sb_bfloat16
/usr/share/benchdnn/inputs/matmul/harness_matmul_bert_inf_sb_int8
/usr/share/benchdnn/inputs/matmul/harness_matmul_bert_tr_bfloat16
/usr/share/benchdnn/inputs/matmul/harness_matmul_bert_tr_float16
/usr/share/benchdnn/inputs/matmul/harness_matmul_data_tags
/usr/share/benchdnn/inputs/matmul/harness_matmul_gpt-j_2016-32_inf_lb_bfloat16
/usr/share/benchdnn/inputs/matmul/harness_matmul_gpt-j_2016-32_inf_lb_f32
/usr/share/benchdnn/inputs/matmul/harness_matmul_gpt-j_2016-32_inf_lb_float16
/usr/share/benchdnn/inputs/matmul/harness_matmul_gpt-j_2016-32_inf_sb_bfloat16
/usr/share/benchdnn/inputs/matmul/harness_matmul_gpt-j_2016-32_inf_sb_f32
/usr/share/benchdnn/inputs/matmul/harness_matmul_gpt-j_2016-32_inf_sb_float16
/usr/share/benchdnn/inputs/matmul/harness_matmul_gpt-j_32-32_inf_lb_bfloat16
/usr/share/benchdnn/inputs/matmul/harness_matmul_gpt-j_32-32_inf_lb_f32
/usr/share/benchdnn/inputs/matmul/harness_matmul_gpt-j_32-32_inf_lb_float16
/usr/share/benchdnn/inputs/matmul/harness_matmul_gpt-j_32-32_inf_sb_bfloat16
/usr/share/benchdnn/inputs/matmul/harness_matmul_gpt-j_32-32_inf_sb_f32
/usr/share/benchdnn/inputs/matmul/harness_matmul_gpt-j_32-32_inf_sb_float16
/usr/share/benchdnn/inputs/matmul/harness_matmul_regression_bf16
/usr/share/benchdnn/inputs/matmul/harness_matmul_regression_f32
/usr/share/benchdnn/inputs/matmul/harness_matmul_regression_float16
/usr/share/benchdnn/inputs/matmul/harness_matmul_regression_int8
/usr/share/benchdnn/inputs/matmul/harness_matmul_runtime_f32
/usr/share/benchdnn/inputs/matmul/harness_matmul_runtime_int8
/usr/share/benchdnn/inputs/matmul/harness_matmul_strides
/usr/share/benchdnn/inputs/matmul/harness_matmul_transformer_lt_inf_lb_bfloat16
/usr/share/benchdnn/inputs/matmul/harness_matmul_transformer_lt_inf_lb_int8
/usr/share/benchdnn/inputs/matmul/harness_matmul_transformer_lt_inf_sb_bfloat16
/usr/share/benchdnn/inputs/matmul/harness_matmul_transformer_lt_inf_sb_int8
/usr/share/benchdnn/inputs/matmul/harness_matmul_transformer_lt_tr_bfloat16
/usr/share/benchdnn/inputs/matmul/option_set_fwks_ext_gpu
/usr/share/benchdnn/inputs/matmul/option_set_fwks_key_gpu
/usr/share/benchdnn/inputs/matmul/option_set_fwks_key_gpu_tf32
/usr/share/benchdnn/inputs/matmul/option_set_fwks_key_perf_gpu
/usr/share/benchdnn/inputs/matmul/perf_matmul_inference_batched
/usr/share/benchdnn/inputs/matmul/perf_matmul_inference_lb
/usr/share/benchdnn/inputs/matmul/perf_matmul_training
/usr/share/benchdnn/inputs/matmul/shapes_2d
/usr/share/benchdnn/inputs/matmul/shapes_2d_ci
/usr/share/benchdnn/inputs/matmul/shapes_3d
/usr/share/benchdnn/inputs/matmul/shapes_4d
/usr/share/benchdnn/inputs/matmul/shapes_bert
/usr/share/benchdnn/inputs/matmul/shapes_bert_large
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_inf_lb_alexnet
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_inf_lb_dlrm
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_inf_lb_gmnt
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_inf_lb_googlenet
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_inf_lb_maskrcnn
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_inf_lb_ncf
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_inf_lb_resnet
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_inf_lb_rnn_t
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_inf_lb_vgg16
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_inf_lb_wd
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_inf_sb_dien
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_alexnet_bwd_d
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_alexnet_bwd_w
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_alexnet_fwd
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_dlrm_bwd_d
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_dlrm_bwd_w
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_dlrm_fwd
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_gmnt_bwd_d
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_gmnt_bwd_w
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_gmnt_fwd
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_googlenet_bwd_d
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_googlenet_bwd_w
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_googlenet_fwd
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_maskrcnn_bwd_d
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_maskrcnn_bwd_w
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_maskrcnn_fwd
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_ncf_bwd_d
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_ncf_bwd_w
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_ncf_fwd
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_resnet_bwd_d
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_resnet_bwd_w
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_resnet_fwd
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_rnn_t_bwd_d
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_rnn_t_bwd_w
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_rnn_t_fwd
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_vgg16_bwd_d
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_vgg16_bwd_w
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_vgg16_fwd
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_wd_bwd_d
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_wd_bwd_w
/usr/share/benchdnn/inputs/matmul/shapes_converted_ip_tr_wd_fwd
/usr/share/benchdnn/inputs/matmul/shapes_multidim
/usr/share/benchdnn/inputs/matmul/shapes_sparse
/usr/share/benchdnn/inputs/matmul/shapes_transformer
/usr/share/benchdnn/inputs/matmul/test_matmul_all
/usr/share/benchdnn/inputs/matmul/test_matmul_bf32_bf16
/usr/share/benchdnn/inputs/matmul/test_matmul_bfloat16
/usr/share/benchdnn/inputs/matmul/test_matmul_bfloat16_ymm
/usr/share/benchdnn/inputs/matmul/test_matmul_ci
/usr/share/benchdnn/inputs/matmul/test_matmul_float16
/usr/share/benchdnn/inputs/matmul/test_matmul_gpu
/usr/share/benchdnn/inputs/matmul/test_matmul_int8
/usr/share/benchdnn/inputs/matmul/test_matmul_multidims
/usr/share/benchdnn/inputs/matmul/test_matmul_smoke
/usr/share/benchdnn/inputs/matmul/test_matmul_sparse
/usr/share/benchdnn/inputs/matmul/test_matmul_sparse_ci
/usr/share/benchdnn/inputs/pool
/usr/share/benchdnn/inputs/pool/harness_pooling_different_dt
/usr/share/benchdnn/inputs/pool/option_set_fwks_ext_gpu
/usr/share/benchdnn/inputs/pool/option_set_fwks_key_gpu
/usr/share/benchdnn/inputs/pool/perf_pool_gpu
/usr/share/benchdnn/inputs/pool/set_all
/usr/share/benchdnn/inputs/pool/set_all_small
/usr/share/benchdnn/inputs/pool/set_topologies
/usr/share/benchdnn/inputs/pool/set_topologies_gpu
/usr/share/benchdnn/inputs/pool/shapes_1d
/usr/share/benchdnn/inputs/pool/shapes_2d
/usr/share/benchdnn/inputs/pool/shapes_2d_small
/usr/share/benchdnn/inputs/pool/shapes_3d
/usr/share/benchdnn/inputs/pool/shapes_3d_small
/usr/share/benchdnn/inputs/pool/shapes_3d_unet
/usr/share/benchdnn/inputs/pool/shapes_alexnet
/usr/share/benchdnn/inputs/pool/shapes_basic
/usr/share/benchdnn/inputs/pool/shapes_global_pooling
/usr/share/benchdnn/inputs/pool/shapes_googlenet_v1
/usr/share/benchdnn/inputs/pool/shapes_googlenet_v3
/usr/share/benchdnn/inputs/pool/shapes_i3d_resnet50_v1
/usr/share/benchdnn/inputs/pool/shapes_resnet_50
/usr/share/benchdnn/inputs/pool/test_pool_all
/usr/share/benchdnn/inputs/pool/test_pool_bfloat16
/usr/share/benchdnn/inputs/pool/test_pool_ci
/usr/share/benchdnn/inputs/pool/test_pool_float16
/usr/share/benchdnn/inputs/pool/test_pool_gpu
/usr/share/benchdnn/inputs/pool/test_pool_smoke
/usr/share/benchdnn/inputs/prelu
/usr/share/benchdnn/inputs/prelu/option_set_all
/usr/share/benchdnn/inputs/prelu/shapes_all
/usr/share/benchdnn/inputs/prelu/shapes_ci
/usr/share/benchdnn/inputs/prelu/test_prelu_all
/usr/share/benchdnn/inputs/prelu/test_prelu_bfloat16
/usr/share/benchdnn/inputs/prelu/test_prelu_ci
/usr/share/benchdnn/inputs/prelu/test_prelu_float16
/usr/share/benchdnn/inputs/prelu/test_prelu_gpu
/usr/share/benchdnn/inputs/prelu/test_prelu_smoke
/usr/share/benchdnn/inputs/reduction
/usr/share/benchdnn/inputs/reduction/harness_reduction_bf16
/usr/share/benchdnn/inputs/reduction/harness_reduction_f16
/usr/share/benchdnn/inputs/reduction/harness_reduction_f32
/usr/share/benchdnn/inputs/reduction/harness_reduction_i8
/usr/share/benchdnn/inputs/reduction/option_set_all
/usr/share/benchdnn/inputs/reduction/option_set_all_algs
/usr/share/benchdnn/inputs/reduction/option_set_all_algs_ci
/usr/share/benchdnn/inputs/reduction/option_set_all_algs_int8
/usr/share/benchdnn/inputs/reduction/option_set_all_algs_int8_ci
/usr/share/benchdnn/inputs/reduction/option_set_fwks_ext_gpu
/usr/share/benchdnn/inputs/reduction/option_set_fwks_key_gpu
/usr/share/benchdnn/inputs/reduction/perf_reduction_gpu
/usr/share/benchdnn/inputs/reduction/shapes_ci
/usr/share/benchdnn/inputs/reduction/shapes_gpu_all
/usr/share/benchdnn/inputs/reduction/shapes_nested_gpu
/usr/share/benchdnn/inputs/reduction/test_reduction_all
/usr/share/benchdnn/inputs/reduction/test_reduction_bfloat16
/usr/share/benchdnn/inputs/reduction/test_reduction_ci
/usr/share/benchdnn/inputs/reduction/test_reduction_float16
/usr/share/benchdnn/inputs/reduction/test_reduction_gpu
/usr/share/benchdnn/inputs/reduction/test_reduction_smoke
/usr/share/benchdnn/inputs/reorder
/usr/share/benchdnn/inputs/reorder/harness_conv_reorders_gpu
/usr/share/benchdnn/inputs/reorder/harness_reorder_amx
/usr/share/benchdnn/inputs/reorder/harness_reorder_compensation
/usr/share/benchdnn/inputs/reorder/harness_reorder_cross_engine_gpu
/usr/share/benchdnn/inputs/reorder/harness_reorder_regression
/usr/share/benchdnn/inputs/reorder/harness_reorder_runtime
/usr/share/benchdnn/inputs/reorder/harness_reorder_saturation
/usr/share/benchdnn/inputs/reorder/harness_reorder_scales
/usr/share/benchdnn/inputs/reorder/option_set_fwks_ext_gpu
/usr/share/benchdnn/inputs/reorder/option_set_fwks_key_gpu
/usr/share/benchdnn/inputs/reorder/test_reorder_all
/usr/share/benchdnn/inputs/reorder/test_reorder_bfloat16
/usr/share/benchdnn/inputs/reorder/test_reorder_ci
/usr/share/benchdnn/inputs/reorder/test_reorder_float16
/usr/share/benchdnn/inputs/reorder/test_reorder_gpu
/usr/share/benchdnn/inputs/reorder/test_reorder_smoke
/usr/share/benchdnn/inputs/resampling
/usr/share/benchdnn/inputs/resampling/option_set_fwks_key_gpu
/usr/share/benchdnn/inputs/resampling/set_all
/usr/share/benchdnn/inputs/resampling/shapes_1d
/usr/share/benchdnn/inputs/resampling/shapes_2d
/usr/share/benchdnn/inputs/resampling/shapes_3d
/usr/share/benchdnn/inputs/resampling/shapes_ci
/usr/share/benchdnn/inputs/resampling/shapes_maskrcnn
/usr/share/benchdnn/inputs/resampling/test_resampling_all
/usr/share/benchdnn/inputs/resampling/test_resampling_bfloat16
/usr/share/benchdnn/inputs/resampling/test_resampling_ci
/usr/share/benchdnn/inputs/resampling/test_resampling_float16
/usr/share/benchdnn/inputs/resampling/test_resampling_gpu
/usr/share/benchdnn/inputs/resampling/test_resampling_smoke
/usr/share/benchdnn/inputs/rnn
/usr/share/benchdnn/inputs/rnn/harness_augru_bf32
/usr/share/benchdnn/inputs/rnn/harness_augru_bfloat16
/usr/share/benchdnn/inputs/rnn/harness_gru_bf32
/usr/share/benchdnn/inputs/rnn/harness_gru_bfloat16
/usr/share/benchdnn/inputs/rnn/harness_gru_f32
/usr/share/benchdnn/inputs/rnn/harness_gru_int8
/usr/share/benchdnn/inputs/rnn/harness_gru_regression
/usr/share/benchdnn/inputs/rnn/harness_lstm_bf32
/usr/share/benchdnn/inputs/rnn/harness_lstm_bfloat16
/usr/share/benchdnn/inputs/rnn/harness_lstm_f32
/usr/share/benchdnn/inputs/rnn/harness_lstm_int8
/usr/share/benchdnn/inputs/rnn/harness_rnn_bf32
/usr/share/benchdnn/inputs/rnn/harness_rnn_bfloat16
/usr/share/benchdnn/inputs/rnn/harness_rnn_f32
/usr/share/benchdnn/inputs/rnn/option_set_fwks_key_gpu
/usr/share/benchdnn/inputs/rnn/option_set_gnmt_decoder
/usr/share/benchdnn/inputs/rnn/option_set_gnmt_encoder
/usr/share/benchdnn/inputs/rnn/option_set_large
/usr/share/benchdnn/inputs/rnn/option_set_lstmp_large
/usr/share/benchdnn/inputs/rnn/option_set_lstmp_small
/usr/share/benchdnn/inputs/rnn/option_set_perf_inference_lb
/usr/share/benchdnn/inputs/rnn/option_set_perf_inference_sb
/usr/share/benchdnn/inputs/rnn/option_set_perf_training
/usr/share/benchdnn/inputs/rnn/option_set_rnnt
/usr/share/benchdnn/inputs/rnn/option_set_small
/usr/share/benchdnn/inputs/rnn/perf_rnn_cpu
/usr/share/benchdnn/inputs/rnn/perf_rnn_gen9
/usr/share/benchdnn/inputs/rnn/perf_rnn_inference_lb
/usr/share/benchdnn/inputs/rnn/perf_rnn_inference_sb
/usr/share/benchdnn/inputs/rnn/perf_rnn_knx
/usr/share/benchdnn/inputs/rnn/perf_rnn_training
/usr/share/benchdnn/inputs/rnn/perf_rnn_xe_hp
/usr/share/benchdnn/inputs/rnn/perf_rnn_xe_lp
/usr/share/benchdnn/inputs/rnn/shapes_deepspeech_2
/usr/share/benchdnn/inputs/rnn/shapes_inference
/usr/share/benchdnn/inputs/rnn/shapes_large
/usr/share/benchdnn/inputs/rnn/shapes_large_gru
/usr/share/benchdnn/inputs/rnn/shapes_lstmp_large
/usr/share/benchdnn/inputs/rnn/shapes_lstmp_small
/usr/share/benchdnn/inputs/rnn/shapes_rnn_t
/usr/share/benchdnn/inputs/rnn/shapes_small
/usr/share/benchdnn/inputs/rnn/shapes_small_gru
/usr/share/benchdnn/inputs/rnn/shapes_training
/usr/share/benchdnn/inputs/rnn/test_augru_all
/usr/share/benchdnn/inputs/rnn/test_augru_bf32_bfloat16
/usr/share/benchdnn/inputs/rnn/test_augru_bfloat16
/usr/share/benchdnn/inputs/rnn/test_augru_ci
/usr/share/benchdnn/inputs/rnn/test_gru_all
/usr/share/benchdnn/inputs/rnn/test_gru_bf32_bfloat16
/usr/share/benchdnn/inputs/rnn/test_gru_bfloat16
/usr/share/benchdnn/inputs/rnn/test_gru_ci
/usr/share/benchdnn/inputs/rnn/test_gru_int8
/usr/share/benchdnn/inputs/rnn/test_lstm_all
/usr/share/benchdnn/inputs/rnn/test_lstm_bf32_bfloat16
/usr/share/benchdnn/inputs/rnn/test_lstm_bfloat16
/usr/share/benchdnn/inputs/rnn/test_lstm_bfloat16_ymm
/usr/share/benchdnn/inputs/rnn/test_lstm_ci
/usr/share/benchdnn/inputs/rnn/test_lstm_int8
/usr/share/benchdnn/inputs/rnn/test_rnn_all
/usr/share/benchdnn/inputs/rnn/test_rnn_bf32_bfloat16
/usr/share/benchdnn/inputs/rnn/test_rnn_bfloat16
/usr/share/benchdnn/inputs/rnn/test_rnn_ci
/usr/share/benchdnn/inputs/rnn/test_rnn_gpu
/usr/share/benchdnn/inputs/self
/usr/share/benchdnn/inputs/self/test_self_ci
/usr/share/benchdnn/inputs/self/test_self_f32
/usr/share/benchdnn/inputs/self/test_self_smoke
/usr/share/benchdnn/inputs/shuffle
/usr/share/benchdnn/inputs/shuffle/option_set_all
/usr/share/benchdnn/inputs/shuffle/option_set_min
/usr/share/benchdnn/inputs/shuffle/option_set_perf
/usr/share/benchdnn/inputs/shuffle/perf_shuffle_cpu
/usr/share/benchdnn/inputs/shuffle/test_shuffle_all
/usr/share/benchdnn/inputs/shuffle/test_shuffle_bfloat16
/usr/share/benchdnn/inputs/shuffle/test_shuffle_ci
/usr/share/benchdnn/inputs/shuffle/test_shuffle_float16
/usr/share/benchdnn/inputs/shuffle/test_shuffle_gpu
/usr/share/benchdnn/inputs/shuffle/test_shuffle_smoke
/usr/share/benchdnn/inputs/softmax
/usr/share/benchdnn/inputs/softmax/option_set_fwks_ext_gpu
/usr/share/benchdnn/inputs/softmax/option_set_fwks_key_gpu
/usr/share/benchdnn/inputs/softmax/set_0d
/usr/share/benchdnn/inputs/softmax/shapes_0d
/usr/share/benchdnn/inputs/softmax/shapes_2d
/usr/share/benchdnn/inputs/softmax/shapes_3d
/usr/share/benchdnn/inputs/softmax/shapes_ci
/usr/share/benchdnn/inputs/softmax/shapes_large
/usr/share/benchdnn/inputs/softmax/shapes_large_axis
/usr/share/benchdnn/inputs/softmax/shapes_nlp
/usr/share/benchdnn/inputs/softmax/test_softmax_acl
/usr/share/benchdnn/inputs/softmax/test_softmax_all
/usr/share/benchdnn/inputs/softmax/test_softmax_bfloat16
/usr/share/benchdnn/inputs/softmax/test_softmax_ci
/usr/share/benchdnn/inputs/softmax/test_softmax_float16
/usr/share/benchdnn/inputs/softmax/test_softmax_gpu
/usr/share/benchdnn/inputs/softmax/test_softmax_smoke
/usr/share/benchdnn/inputs/sum
/usr/share/benchdnn/inputs/sum/option_set_fwks_ext_gpu
/usr/share/benchdnn/inputs/sum/option_set_fwks_key_gpu
/usr/share/benchdnn/inputs/sum/test_sum_all
/usr/share/benchdnn/inputs/sum/test_sum_bfloat16
/usr/share/benchdnn/inputs/sum/test_sum_ci
/usr/share/benchdnn/inputs/sum/test_sum_float16
/usr/share/benchdnn/inputs/sum/test_sum_gpu
/usr/share/benchdnn/inputs/sum/test_sum_smoke
/usr/share/benchdnn/inputs/zeropad
/usr/share/benchdnn/inputs/zeropad/option_set_fwks_ext_gpu
/usr/share/benchdnn/inputs/zeropad/option_set_fwks_key_gpu
/usr/share/benchdnn/inputs/zeropad/set_dim1_block_3d
/usr/share/benchdnn/inputs/zeropad/set_dim1dim2_block_2d
/usr/share/benchdnn/inputs/zeropad/set_dim1dim2_block_3d
/usr/share/benchdnn/inputs/zeropad/set_dim2_block_3d
/usr/share/benchdnn/inputs/zeropad/set_dim2dim3_block_4d
/usr/share/benchdnn/inputs/zeropad/shapes_dim1_block_3d
/usr/share/benchdnn/inputs/zeropad/shapes_dim1dim2_block_2d
/usr/share/benchdnn/inputs/zeropad/shapes_dim1dim2_block_3d
/usr/share/benchdnn/inputs/zeropad/shapes_dim2_block_3d
/usr/share/benchdnn/inputs/zeropad/shapes_dim2dim3_block_4d
/usr/share/benchdnn/inputs/zeropad/test_zeropad_ci
/usr/share/benchdnn/inputs/zeropad/test_zeropad_gpu


Generated by rpm2html 1.8.1

Fabrice Bellet, Wed Apr 24 00:23:58 2024