In performance-critical code I am given two large matrices (sizes in the thousands), `expectations` and `realizations`, of the same size but holding different values. Both matrices are partitioned the same way along their columns, and each submatrix can have a different number of columns. Like this:
submat1 submat2 submat3
-----------------------------
|...........| .......| .....|
|...........| .......| .....|
|...........| .......| .....|
|...........| .......| .....|
|...........| .......| .....|
-----------------------------
I need the fastest way to fill a third matrix, which in pseudocode is:
for each submatrix
    for each row in submatrix
        pos = argmax(expectations(row, start_submatrix(col):end_submatrix(col)))
        result(row, col) = realization(row, pos)
That is, for each submatrix I scan every row, find the position of the largest element in that row of the `expectations` submatrix, and place the corresponding value of the `realizations` matrix into the result matrix.
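To make the task concrete before the Armadillo version below, here is a minimal, dependency-free sketch of that pseudocode on plain column-major arrays (the function and parameter names are illustrative, not from any library):

```cpp
#include <cstddef>

// For each column block b (of width block_width[b], laid out every max_block
// columns), find the per-row argmax in `expectations` and copy the matching
// entry of `realizations` into column b of `results`. Column-major storage.
void fill_results(const double* expectations, const double* realizations,
                  double* results, const int* block_width, int n_blocks,
                  int max_block, int n_rows)
{
    for (int b = 0; b < n_blocks; ++b) {
        const int first_col = b * max_block;   // first column of this submatrix
        const int n_cols    = block_width[b];  // its actual width
        for (int row = 0; row < n_rows; ++row) {
            int pos = first_col;               // running argmax over the block
            for (int col = first_col + 1; col < first_col + n_cols; ++col)
                if (expectations[(size_t)col * n_rows + row] >
                    expectations[(size_t)pos * n_rows + row])
                    pos = col;
            results[(size_t)b * n_rows + row] =
                realizations[(size_t)pos * n_rows + row];
        }
    }
}
```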
I am looking for the fastest way to do this, possibly with smart parallelization/cache optimizations, because this function is where I spend about 40% of my algorithm's time. I am using Visual Studio 15.9.6 on Windows 10.
Here is my reference C++ implementation, using Armadillo (column-major) matrices:
#include <iostream>
#include <chrono>
#include <vector>
#include <armadillo> // needed for arma::mat / arma::uvec below
///Trivial implementation, for illustration purposes
void find_max_vertical_trivial(const arma::mat& expectations, const arma::mat& realizations, arma::mat& results, const arma::uvec& list, const int max_size_action)
{
    const int number_columns_results = results.n_cols;
    const int number_rows = expectations.n_rows;
#pragma omp parallel for schedule(static)
    for (int submatrix_to_process = 0; submatrix_to_process < number_columns_results; submatrix_to_process++)
    {
        const int start_loop = submatrix_to_process * max_size_action;
        //Looping over rows
        for (int current_row = 0; current_row < number_rows; current_row++)
        {
            int candidate = start_loop;
            const int end_loop = candidate + list(submatrix_to_process);
            //Finding the optimal action
            for (int act = candidate + 1; act < end_loop; act++)
            {
                if (expectations(current_row, act) > expectations(current_row, candidate))
                    candidate = act;
            }
            //Placing the corresponding realization into the results
            results(current_row, submatrix_to_process) = realizations(current_row, candidate);
        }
    }
}
This is the fastest version I came up with. Can it be improved?
///Stripped all Armadillo functionality, down to bare C
void find_max_vertical_optimized(const arma::mat& expectations, const arma::mat& realizations, arma::mat& values, const arma::uvec& list, const int max_block)
{
    const int n_columns = values.n_cols;
    const int number_rows = expectations.n_rows;
    const auto exp_ptr = expectations.memptr();
    const auto real_ptr = realizations.memptr();
    const auto values_ptr = values.memptr();
    const auto list_ptr = list.memptr();
#pragma omp parallel for schedule(static)
    for (int col_position = 0; col_position < n_columns; col_position++)
    {
        const int start_loop = col_position * max_block * number_rows;
        const int end_loop = start_loop + list_ptr[col_position] * number_rows;
        const int position_value = col_position * number_rows;
        for (int row_position = 0; row_position < number_rows; row_position++)
        {
            int candidate = start_loop;
            const auto st_exp = exp_ptr + row_position;
            const auto st_real = real_ptr + row_position;
            const auto st_val = values_ptr + row_position;
            for (int new_candidate = candidate + number_rows; new_candidate < end_loop; new_candidate += number_rows)
            {
                if (st_exp[new_candidate] > st_exp[candidate])
                    candidate = new_candidate;
            }
            st_val[position_value] = st_real[candidate];
        }
    }
}
And here is the test section, where I compare the performance of the two:
typedef std::chrono::microseconds dur;
const double dur2seconds = 1e6;

//Testing the two methods
int main()
{
    const int max_cols_submatrix = 6; //Typical size: 3-100
    const int n_test = 500;
    const int number_rows = 2000; //Typical size: 1000-10000
    std::vector<int> size_to_test = { 4, 10, 40, 100, 1000, 5000 }; //Typical size: 10-5000
    arma::vec time_test(n_test, arma::fill::zeros);
    arma::vec time_trivial(n_test, arma::fill::zeros);
    for (const auto& size_grid : size_to_test) {
        arma::mat expectations(number_rows, max_cols_submatrix * size_grid, arma::fill::randn);
        arma::mat realizations(number_rows, max_cols_submatrix * size_grid, arma::fill::randn);
        arma::mat reference_values(number_rows, size_grid, arma::fill::zeros);
        arma::mat optimized_values(number_rows, size_grid, arma::fill::zeros);
        arma::uvec number_columns_per_submatrix(size_grid);
        //Generate a random number of columns for each submatrix
        number_columns_per_submatrix = arma::conv_to<arma::uvec>::from(arma::vec(size_grid, arma::fill::randu) * max_cols_submatrix);
        for (int i = 0; i < n_test; i++) {
            auto st_meas = std::chrono::high_resolution_clock::now();
            find_max_vertical_trivial(expectations, realizations, reference_values, number_columns_per_submatrix, max_cols_submatrix);
            time_trivial(i) = std::chrono::duration_cast<dur>(std::chrono::high_resolution_clock::now() - st_meas).count() / dur2seconds;
            st_meas = std::chrono::high_resolution_clock::now();
            find_max_vertical_optimized(expectations, realizations, optimized_values, number_columns_per_submatrix, max_cols_submatrix);
            time_test(i) = std::chrono::duration_cast<dur>(std::chrono::high_resolution_clock::now() - st_meas).count() / dur2seconds;
            const auto diff = arma::sum(arma::sum(arma::abs(reference_values - optimized_values)));
            if (diff > 1e-3)
            {
                std::cout << "Error: " << diff << "\n";
                throw std::runtime_error("Error");
            }
        }
        std::cout << "grid size:" << size_grid << "\n";
        const double mean_time_trivial = arma::mean(time_trivial);
        const double mean_time_opt = arma::mean(time_test);
        std::cout << "Trivial: " << mean_time_trivial << " s +/-" << 1.95 * arma::stddev(time_trivial) / sqrt(n_test) << "\n";
        std::cout << "Optimized: " << mean_time_opt << " s (" << (mean_time_opt / mean_time_trivial - 1) * 100.0 << " %) " << "+/-" << 1.95 * arma::stddev(time_test) / sqrt(n_test) << "\n";
    }
}
Posted on 2019-02-27 04:39:25
You can probably optimize for cache with a SIMD loop that reads maybe 8 or 12 full row vectors, then the same rows for the next column. (So with 32-bit elements, 8*4 or 8*8 rows in parallel.) You are using MSVC, which supports x86 SSE2/AVX2 intrinsics such as `_mm256_load_ps` and `_mm256_max_ps`, or `_mm256_max_epi32`.
If you start at an alignment boundary, hopefully you read all the way through every cache line you touch. Then use the same access pattern for the output. (So you read 2 to 6 contiguous cache lines, with a large stride between read/write blocks.)
Or maybe record temporary results somewhere compact (1 value per row per segment) and do the cache-friendlier copies into each output column later. Both approaches are worth trying, though; mixing reads and writes may let the CPU overlap the work better and find more memory-level parallelism.
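As a rough illustration of the first approach, here is a hedged sketch using baseline SSE2 (two doubles per vector, so it compiles on any x86-64 target without extra flags); the same pattern extends directly to AVX2 with `_mm256_*` intrinsics and wider strips. Instead of tracking the argmax index, the matching realization is blended alongside the running max with the same comparison mask, so no gather is needed at the end. It assumes column-major storage and an even `number_rows`; function and parameter names are illustrative, not from the question's code.

```cpp
#include <emmintrin.h> // SSE2
#include <cstddef>

// Per-row argmax over each column block of `exp_ptr`, copying the matching
// entry of `real_ptr` into the block's output column. Two rows per iteration.
void find_max_vertical_sse2(const double* exp_ptr, const double* real_ptr,
                            double* out_ptr, const int* block_width,
                            int n_blocks, int max_block, int number_rows)
{
    for (int b = 0; b < n_blocks; ++b) {
        const double* exp_blk  = exp_ptr  + (size_t)b * max_block * number_rows;
        const double* real_blk = real_ptr + (size_t)b * max_block * number_rows;
        double* out_col = out_ptr + (size_t)b * number_rows;
        const int n_cols = block_width[b];
        for (int row = 0; row < number_rows; row += 2) {
            __m128d best_exp  = _mm_loadu_pd(exp_blk  + row); // column 0
            __m128d best_real = _mm_loadu_pd(real_blk + row);
            for (int c = 1; c < n_cols; ++c) {
                const size_t off = (size_t)c * number_rows + row;
                __m128d e  = _mm_loadu_pd(exp_blk  + off);
                __m128d v  = _mm_loadu_pd(real_blk + off);
                __m128d gt = _mm_cmpgt_pd(e, best_exp); // per-lane mask: e > best
                // Branchless blend: take e/v where gt is set, else keep best.
                best_exp  = _mm_or_pd(_mm_and_pd(gt, e), _mm_andnot_pd(gt, best_exp));
                best_real = _mm_or_pd(_mm_and_pd(gt, v), _mm_andnot_pd(gt, best_real));
            }
            _mm_storeu_pd(out_col + row, best_real);
        }
    }
}
```

With AVX, `_mm_or_pd`/`_mm_and_pd`/`_mm_andnot_pd` collapse into a single `_mm256_blendv_pd`, and each iteration covers four doubles per vector.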
https://stackoverflow.com/questions/54851222