OpenCV's Graph API (G-API) is a newer module designed to make regular image processing faster and more lightweight. It achieves both by introducing a graph-based model of execution. G-API is a special OpenCV module: unlike most other main modules, it acts as a framework rather than a collection of specific CV algorithms. G-API provides the means to define CV operations, construct graphs from them using expressions, and finally compile and run those graphs on a particular backend¹.
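To make that execution model concrete, here is a minimal sketch of a plain (non-inference) G-API program. It is illustrative only: the chosen operations and the file name input.jpg are placeholders, not part of the face analytics sample below. The graph is described once in terms of cv::GMat data and cv::gapi operations, wrapped into a cv::GComputation, and then applied to real cv::Mat data:

#include <opencv2/gapi.hpp>
#include <opencv2/gapi/core.hpp>
#include <opencv2/gapi/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    // Describe the graph: data objects (cv::GMat) plus operations on them
    cv::GMat in;
    cv::GMat gray    = cv::gapi::BGR2Gray(in);
    cv::GMat blurred = cv::gapi::blur(gray, cv::Size(5, 5));
    cv::GMat edges   = cv::gapi::Canny(blurred, 32, 128, 3);
    cv::GComputation pipeline(cv::GIn(in), cv::GOut(edges));

    // Execute the graph on real data; compilation happens under the hood
    cv::Mat input = cv::imread("input.jpg"), output;
    pipeline.apply(cv::gin(input), cv::gout(output));
    cv::imwrite("edges.jpg", output);
    return 0;
}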
For example, you can use G-API to build a face analytics pipeline consisting of image acquisition and decoding, preprocessing, detection, classification, and visualization. G-API takes care of the pipeline itself, so if the algorithm or the platform changes, the execution model adapts to it automatically².
Building such a graph works like any other G-API program: you declare graph data (with cv::GMat, cv::GScalar, and cv::GArray) and apply operations to it. Inference also becomes an operation in the graph, but it is declared in a slightly different way (via G_API_NET, as shown below).
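Before the complete sample, here is a bare-bones sketch of that pattern (the network name, model paths, and input size are placeholders): a network is declared with G_API_NET rather than G_API_OP, cv::gapi::infer<> inserts it into the graph as an ordinary node, and the concrete model files and device are supplied only when the graph is compiled:

#include <opencv2/gapi.hpp>
#include <opencv2/gapi/infer.hpp>
#include <opencv2/gapi/infer/ie.hpp>

// Declare the network as a graph operation: output type(s), input type(s), unique tag
G_API_NET(FaceNet, <cv::GMat(cv::GMat)>, "sample.face-net");

int main()
{
    // Inference is just another node in the expression graph
    cv::GMat in;
    cv::GMat detections = cv::gapi::infer<FaceNet>(in);
    cv::GComputation graph(cv::GIn(in), cv::GOut(detections));

    // The concrete model and device are bound only at compile time
    auto net = cv::gapi::ie::Params<FaceNet>{ "face.xml", "face.bin", "CPU" };
    auto compiled = graph.compile(cv::GMatDesc{CV_8U, 3, cv::Size(640, 480)},
                                  cv::compile_args(cv::gapi::networks(net)));
    // compiled(...) can now be called on real cv::Mat frames
    return 0;
}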
The complete sample code for building the face analytics pipeline with G-API follows:
#include <algorithm>
#include <iostream>
#include <sstream>
#include <string>
#include <tuple>
#include <vector>

#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>

#include <opencv2/gapi.hpp>
#include <opencv2/gapi/core.hpp>
#include <opencv2/gapi/imgproc.hpp>
#include <opencv2/gapi/infer.hpp>
#include <opencv2/gapi/infer/ie.hpp>
#include <opencv2/gapi/infer/parsers.hpp>
#include <opencv2/gapi/cpu/gcpukernel.hpp>
#include <opencv2/gapi/streaming/cap.hpp>
// Declare the networks used in the pipeline: output type(s), input type(s), and a tag
G_API_NET(FaceDetector, <cv::GMat(cv::GMat)>, "face-detector");

using AGInfo = std::tuple<cv::GMat, cv::GMat>;   // age blob, gender blob
G_API_NET(AgeGender, <AGInfo(cv::GMat)>,      "age-gender");
G_API_NET(Emotions,  <cv::GMat(cv::GMat)>,    "emotions");
// Drawing helpers used by the custom post-processing kernel
void drawFace(cv::Mat &m, const cv::Rect &rc)
{
    cv::rectangle(m, rc, {0,255,0}, 2);
}

void drawAgeGender(cv::Mat &m, const cv::Rect &rc,
                   const int age, const int gender)
{
    std::ostringstream ss;
    ss << (gender == 0 ? "M" : "F") << ": " << age;
    auto label = ss.str();
    int baseLine = 0;
    auto size = cv::getTextSize(label, cv::FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);
    cv::rectangle(m, rc.tl() + cv::Point{0, -size.height - baseLine},
                     rc.tl() + cv::Point{size.width, size.height}, {0,255,0}, -1);
    cv::putText(m, label, rc.tl() + cv::Point{0, -baseLine},
                cv::FONT_HERSHEY_SIMPLEX, 0.5, {0,0,0}, 1);
}

void drawEmotion(cv::Mat &m, const cv::Rect &rc,
                 const std::string &emotion)
{
    int baseLine = 0;
    auto size = cv::getTextSize(emotion, cv::FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);
    cv::rectangle(m,
                  rc.br() + cv::Point{-size.width, size.height + baseLine},
                  rc.br() + cv::Point{0, -baseLine},
                  {0,255,0}, -1);
    cv::putText(m, emotion, rc.br() + cv::Point{-size.width, -baseLine},
                cv::FONT_HERSHEY_SIMPLEX, 0.5, {0,0,0}, 1);
}
// Declare a custom graph operation for post-processing (drawing the results)
G_API_OP(PostProc, <cv::GMat(cv::GMat,
                             cv::GArray<cv::Rect>,
                             cv::GArray<cv::GMat>,
                             cv::GArray<cv::GMat>,
                             cv::GArray<cv::GMat>)>,
         "sample.custom.post_proc")
{
    static cv::GMatDesc outMeta(const cv::GMatDesc &in,
                                const cv::GArrayDesc&,
                                const cv::GArrayDesc&,
                                const cv::GArrayDesc&,
                                const cv::GArrayDesc&)
    {
        // The output frame has the same format as the input one
        return in;
    }
};
// Implement the custom operation as an OpenCV (CPU) kernel.
// NOTE: the decoding below assumes a scalar age output (age/100), a two-class
// gender softmax and a five-class emotion softmax, as in the OpenVINO
// age-gender and emotions recognition models; adjust it for your models.
GAPI_OCV_KERNEL(OCVPostProc, PostProc)
{
    static void run(const cv::Mat &in,
                    const std::vector<cv::Rect> &faces,
                    const std::vector<cv::Mat> &ages,
                    const std::vector<cv::Mat> &genders,
                    const std::vector<cv::Mat> &emotions,
                    cv::Mat &out)
    {
        static const std::string labels[] =
            {"neutral", "happy", "sad", "surprise", "anger"};
        out = in.clone();
        for (std::size_t i = 0; i < faces.size(); i++) {
            const float *age_p     = ages[i].ptr<float>();
            const float *gender_p  = genders[i].ptr<float>();
            const float *emotion_p = emotions[i].ptr<float>();
            const int   age    = static_cast<int>(age_p[0] * 100.f);
            const int   gender = gender_p[0] > gender_p[1] ? 1 : 0;  // 0 == male
            const auto  top    = std::max_element(emotion_p, emotion_p + 5);
            drawFace(out, faces[i]);
            drawAgeGender(out, faces[i], age, gender);
            drawEmotion(out, faces[i], labels[top - emotion_p]);
        }
    }
};
// Helper: strip the extension from a path ("model.xml" -> "model")
static std::string fileNameNoExt(const std::string &filepath)
{
    const auto pos = filepath.rfind('.');
    return pos == std::string::npos ? filepath : filepath.substr(0, pos);
}

int main(int argc, char *argv[])
{
    // Command-line arguments: input video and the three model (.xml) files
    const std::string input  = argc > 1 ? argv[1] : "";
    const std::string fdPath = argc > 2 ? argv[2] : "";
    const std::string agPath = argc > 3 ? argv[3] : "";
    const std::string emPath = argc > 4 ? argv[4] : "";

    // Describe the networks for the Inference Engine (OpenVINO) backend
    auto faceNet = cv::gapi::ie::Params<FaceDetector> {
        fdPath,                          // path to the topology (.xml)
        fileNameNoExt(fdPath) + ".bin",  // path to the weights (.bin)
        "CPU"                            // device to run on
    };
    auto ageGenderNet = cv::gapi::ie::Params<AgeGender> {
        agPath,
        fileNameNoExt(agPath) + ".bin",
        "CPU"
    }.cfgOutputLayers({ "age_conv3", "prob" });  // output layers of the age-gender model
    auto emotionsNet = cv::gapi::ie::Params<Emotions> {
        emPath,
        fileNameNoExt(emPath) + ".bin",
        "CPU"
    };
    // Build the pipeline
    cv::GMat in;
    cv::GMat detections = cv::gapi::infer<FaceDetector>(in);
    // Parse the SSD output blob into a list of face rectangles
    cv::GArray<cv::Rect> faces = cv::gapi::parseSSD(detections,
                                                    cv::gapi::streaming::size(in),
                                                    0.5f, true, true);
    cv::GArray<cv::GMat> ages;
    cv::GArray<cv::GMat> genders;
    std::tie(ages, genders) = cv::gapi::infer<AgeGender>(faces, in);
    cv::GArray<cv::GMat> emotions = cv::gapi::infer<Emotions>(faces, in);

    // Custom post-processing: draw the results on top of the input frame
    cv::GMat out = PostProc::on(in, faces, ages, genders, emotions);

    // Compile the graph in streaming mode, passing the networks and custom kernels
    auto pipeline = cv::GComputation(cv::GIn(in), cv::GOut(out))
        .compileStreaming(cv::compile_args(
            cv::gapi::networks(faceNet, ageGenderNet, emotionsNet),
            cv::gapi::kernels<OCVPostProc>()));
    // Set the video source
    try {
        pipeline.setSource(cv::gin(
            cv::gapi::wip::make_src<cv::gapi::wip::GCaptureSource>(input)));
    } catch (...) {
        std::cerr << "Can't open source: " << input << std::endl;
        return -1;
    }
    // Run the pipeline and show the processed frames
    pipeline.start();
    cv::Mat out_frame;
    while (pipeline.pull(cv::gout(out_frame))) {
        cv::imshow("Out", out_frame);
        cv::waitKey(1);
    }
    return 0;
}
The code above shows how to build a complete face analytics pipeline with OpenCV G-API. More information on building image-processing pipelines with G-API can be found in the official OpenCV documentation.
(1) OpenCV: Graph API. https://docs.opencv.org/4.x/d0/d1e/gapi.html.
(2) OpenCV: Face analytics pipeline with G-API. https://docs.opencv.org/4.x/d8/d24/tutorial_gapi_interactive_face_detection.html.
(3) OpenCV Graph API初体验 - 简书. https://www.jianshu.com/p/8c8c08496a2c.