How to Implement Real-Time Multi-Camera Object Detection with YOLOv5

Contents
  • Preface
  • I. What Makes YOLOv5 Strong
  • II. Deploying YOLOv5 as a Multi-Camera Web Application
    • 1. Reading multiple camera streams
    • 2. Wrapping the model
    • 3. Flask back-end processing
    • 4. Front-end display
  • Summary

Preface

Since its release, YOLOv5 has been one of the most popular object detection models and is widely used across all kinds of scenarios. It is therefore not enough to know how to train a YOLOv5 model; we also need to know how to deploy it. In this post, I use YOLOv5 to build a simple camera-to-web deployment demo, which should give readers some ideas for their own deployments.

I. What Makes YOLOv5 Strong

All that stands between you and an object detection expert may be a YOLOv5 model. YOLOv5 arguably combines nearly every object detection trick in current use: in it you will find many of today's mainstream methods for data augmentation, model training, and model post-processing. Briefly, the methods YOLOv5 uses include the following:

Features YOLOv5 adds (the original post listed these in figures that are not reproduced here; they include, among others, adaptive anchor computation, adaptive letterbox image scaling, and the Focus slicing structure).

Training and prediction tricks (likewise shown in figures; they include, among others, Mosaic data augmentation, warmup with cosine learning-rate decay, EMA weight averaging, mixed-precision training, and test-time augmentation).

II. Deploying YOLOv5 as a Multi-Camera Web Application

1. Reading multiple camera streams

This post uses the LoadStreams class from datasets.py in the YOLOv5 source code to read multiple camera video streams. Since we only need the stream-reading part of datasets.py, we extract it into a new file called camera.py, whose code is shown below:

# coding:utf-8
import os
import cv2
import glob
import time
import numpy as np
from pathlib import Path
from threading import Thread
from utils.datasets import letterbox
from utils.general import clean_str

img_formats = ['bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'dng', 'webp']  # acceptable image suffixes
vid_formats = ['mov', 'avi', 'mp4', 'mpg', 'mpeg', 'm4v', 'wmv', 'mkv']  # acceptable video suffixes


class LoadImages:  # for inference
    def __init__(self, path, img_size=640, stride=32):
        p = str(Path(path).absolute())  # os-agnostic absolute path
        if '*' in p:
            files = sorted(glob.glob(p, recursive=True))  # glob
        elif os.path.isdir(p):
            files = sorted(glob.glob(os.path.join(p, '*.*')))  # dir
        elif os.path.isfile(p):
            files = [p]  # files
        else:
            raise Exception(f'ERROR: {p} does not exist')

        images = [x for x in files if x.split('.')[-1].lower() in img_formats]
        videos = [x for x in files if x.split('.')[-1].lower() in vid_formats]
        ni, nv = len(images), len(videos)

        self.img_size = img_size
        self.stride = stride
        self.files = images + videos
        self.nf = ni + nv  # number of files
        self.video_flag = [False] * ni + [True] * nv
        self.mode = 'image'
        if any(videos):
            self.new_video(videos[0])  # new video
        else:
            self.cap = None
        assert self.nf > 0, f'No images or videos found in {p}. ' \
                            f'Supported formats are:\nimages: {img_formats}\nvideos: {vid_formats}'

    def __iter__(self):
        self.count = 0
        return self

    def __next__(self):
        if self.count == self.nf:
            raise StopIteration
        path = self.files[self.count]

        if self.video_flag[self.count]:
            # Read video
            self.mode = 'video'
            ret_val, img0 = self.cap.read()
            if not ret_val:
                self.count += 1
                self.cap.release()
                if self.count == self.nf:  # last video
                    raise StopIteration
                else:
                    path = self.files[self.count]
                    self.new_video(path)
                    ret_val, img0 = self.cap.read()

            self.frame += 1
            print(f'video {self.count + 1}/{self.nf} ({self.frame}/{self.nframes}) {path}: ', end='')

        else:
            # Read image
            self.count += 1
            img0 = cv2.imread(path)  # BGR
            assert img0 is not None, 'Image Not Found ' + path
            print(f'image {self.count}/{self.nf} {path}: ', end='')

        # Padded resize
        img = letterbox(img0, self.img_size, stride=self.stride)[0]

        # Convert
        img = img[:, :, ::-1].transpose(2, 0, 1)  # BGR to RGB, to 3x416x416
        img = np.ascontiguousarray(img)

        return path, img, img0, self.cap

    def new_video(self, path):
        self.frame = 0
        self.cap = cv2.VideoCapture(path)
        self.nframes = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT))

    def __len__(self):
        return self.nf  # number of files


class LoadWebcam:  # for inference
    def __init__(self, pipe='0', img_size=640, stride=32):
        self.img_size = img_size
        self.stride = stride

        if pipe.isnumeric():
            pipe = eval(pipe)  # local camera
        # pipe = 'rtsp://192.168.1.64/1'  # IP camera
        # pipe = 'rtsp://username:password@192.168.1.64/1'  # IP camera with login
        # pipe = 'http://wmccpinetop.axiscam.net/mjpg/video.mjpg'  # IP golf camera

        self.pipe = pipe
        self.cap = cv2.VideoCapture(pipe)  # video capture object
        self.cap.set(cv2.CAP_PROP_BUFFERSIZE, 3)  # set buffer size

    def __iter__(self):
        self.count = -1
        return self

    def __next__(self):
        self.count += 1
        if cv2.waitKey(1) == ord('q'):  # q to quit
            self.cap.release()
            cv2.destroyAllWindows()
            raise StopIteration

        # Read frame
        if self.pipe == 0:  # local camera
            ret_val, img0 = self.cap.read()
            img0 = cv2.flip(img0, 1)  # flip left-right
        else:  # IP camera
            n = 0
            while True:
                n += 1
                self.cap.grab()
                if n % 30 == 0:  # skip frames
                    ret_val, img0 = self.cap.retrieve()
                    if ret_val:
                        break

        # Print
        assert ret_val, f'Camera Error {self.pipe}'
        img_path = 'webcam.jpg'
        print(f'webcam {self.count}: ', end='')

        # Padded resize
        img = letterbox(img0, self.img_size, stride=self.stride)[0]

        # Convert
        img = img[:, :, ::-1].transpose(2, 0, 1)  # BGR to RGB, to 3x416x416
        img = np.ascontiguousarray(img)

        return img_path, img, img0, None

    def __len__(self):
        return 0


class LoadStreams:  # multiple IP or RTSP cameras
    def __init__(self, sources='streams.txt', img_size=640, stride=32):
        self.mode = 'stream'
        self.img_size = img_size
        self.stride = stride

        if os.path.isfile(sources):
            with open(sources, 'r') as f:
                sources = [x.strip() for x in f.read().strip().splitlines() if len(x.strip())]
        else:
            sources = [sources]

        n = len(sources)
        self.imgs = [None] * n
        self.sources = [clean_str(x) for x in sources]  # clean source names for later
        for i, s in enumerate(sources):
            # Start the thread to read frames from the video stream
            print(f'{i + 1}/{n}: {s}... ', end='')
            cap = cv2.VideoCapture(eval(s) if s.isnumeric() else s)
            assert cap.isOpened(), f'Failed to open {s}'
            w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
            h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
            fps = cap.get(cv2.CAP_PROP_FPS) % 100
            _, self.imgs[i] = cap.read()  # guarantee first frame
            thread = Thread(target=self.update, args=([i, cap]), daemon=True)
            print(f' success ({w}x{h} at {fps:.2f} FPS).')
            thread.start()
        print('')  # newline

        # check for common shapes
        s = np.stack([letterbox(x, self.img_size, stride=self.stride)[0].shape for x in self.imgs], 0)  # shapes
        self.rect = np.unique(s, axis=0).shape[0] == 1  # rect inference if all shapes equal
        if not self.rect:
            print('WARNING: Different stream shapes detected. For optimal performance supply similarly-shaped streams.')

    def update(self, index, cap):
        # Read next stream frame in a daemon thread
        n = 0
        while cap.isOpened():
            n += 1
            # _, self.imgs[index] = cap.read()
            cap.grab()
            if n == 4:  # read every 4th frame
                success, im = cap.retrieve()
                self.imgs[index] = im if success else self.imgs[index] * 0
                n = 0
            time.sleep(0.01)  # wait time

    def __iter__(self):
        self.count = -1
        return self

    def __next__(self):
        self.count += 1
        img0 = self.imgs.copy()
        if cv2.waitKey(1) == ord('q'):  # q to quit
            cv2.destroyAllWindows()
            raise StopIteration

        # Letterbox
        img = [letterbox(x, self.img_size, auto=self.rect, stride=self.stride)[0] for x in img0]

        # Stack
        img = np.stack(img, 0)

        # Convert
        img = img[:, :, :, ::-1].transpose(0, 3, 1, 2)  # BGR to RGB, to bsx3x416x416
        img = np.ascontiguousarray(img)

        return self.sources, img, img0, None

    def __len__(self):
        return 0  # 1E12 frames = 32 streams at 30 FPS for 30 years
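Before wiring LoadStreams into the model, it can be smoke-tested on its own. The following is a minimal sketch of my own (assuming streams.txt holds one stream address per line, as described later): it simply iterates the loader and shows the raw frames; pressing q stops it, since LoadStreams handles the key check internally.

# minimal sketch: exercise LoadStreams without any model
from camera import LoadStreams
import cv2

dataset = LoadStreams('streams.txt', img_size=640, stride=32)
for sources, img, img0, _ in dataset:
    # img: letterboxed batch of shape (num_streams, 3, H, W), ready for a model
    # img0: list of the original BGR frames, one per stream
    for i, frame in enumerate(img0):
        cv2.imshow(sources[i], frame)  # waitKey(1) is called inside LoadStreams.__next__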

2. Wrapping the model

Next, we wrap the YOLOv5 model into an inference interface based on detect.py. Create a new file called yolov5.py and build a class named Darknet whose detect method provides the object detection capability. The code is as follows:

# coding:utf-8
import cv2
import json
import time
import torch
import random
import numpy as np
from camera import LoadStreams, LoadImages
from utils.torch_utils import select_device
from models.experimental import attempt_load
from utils.general import non_max_suppression, scale_coords, check_imshow


class Darknet(object):
    """docstring for Darknet"""
    def __init__(self, opt):
        self.opt = opt
        self.device = select_device(self.opt["device"])
        self.half = self.device.type != 'cpu'  # half precision only supported on CUDA
        self.model = attempt_load(self.opt["weights"], map_location=self.device)
        self.stride = int(self.model.stride.max())
        self.model.to(self.device).eval()
        self.names = self.model.module.names if hasattr(self.model, 'module') else self.model.names
        if self.half:
            self.model.half()
        self.source = self.opt["source"]
        self.webcam = self.source.isnumeric() or self.source.endswith('.txt') or self.source.lower().startswith(
            ('rtsp://', 'rtmp://', 'http://'))

    def preprocess(self, img):
        img = np.ascontiguousarray(img)
        img = torch.from_numpy(img).to(self.device)
        img = img.half() if self.half else img.float()  # uint8 to fp16/32
        img /= 255.0  # normalize 0-255 to 0.0-1.0
        if img.ndimension() == 3:
            img = img.unsqueeze(0)
        return img

    def detect(self, dataset):
        view_img = check_imshow()
        t0 = time.time()
        for path, img, img0s, vid_cap in dataset:
            img = self.preprocess(img)

            t1 = time.time()
            pred = self.model(img, augment=self.opt["augment"])[0]  # 0.22s
            pred = pred.float()
            pred = non_max_suppression(pred, self.opt["conf_thres"], self.opt["iou_thres"])
            t2 = time.time()

            pred_boxes = []
            for i, det in enumerate(pred):
                if self.webcam:  # batch_size >= 1
                    p, s, im0, frame = path[i], '%g: ' % i, img0s[i].copy(), dataset.count
                else:
                    p, s, im0, frame = path, '', img0s, getattr(dataset, 'frame', 0)
                s += '%gx%g ' % img.shape[2:]  # print string
                gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
                if det is not None and len(det):
                    det[:, :4] = scale_coords(
                        img.shape[2:], det[:, :4], im0.shape).round()

                    # Print results
                    for c in det[:, -1].unique():
                        n = (det[:, -1] == c).sum()  # detections per class
                        s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, "  # add to string

                    for *xyxy, conf, cls_id in det:
                        lbl = self.names[int(cls_id)]
                        xyxy = torch.tensor(xyxy).view(1, 4).view(-1).tolist()
                        score = round(conf.tolist(), 3)
                        label = "{}: {}".format(lbl, score)
                        x1, y1, x2, y2 = int(xyxy[0]), int(xyxy[1]), int(xyxy[2]), int(xyxy[3])
                        pred_boxes.append((x1, y1, x2, y2, lbl, score))
                        if view_img:
                            self.plot_one_box(xyxy, im0, color=(255, 0, 0), label=label)

                # Print time (inference + NMS)
                # print(pred_boxes)
                print(f'{s}Done. ({t2 - t1:.3f}s)')

                if view_img:
                    print(str(p))
                    cv2.imshow(str(p), cv2.resize(im0, (800, 600)))
                    if self.webcam:
                        if cv2.waitKey(1) & 0xFF == ord('q'):
                            break
                    else:
                        cv2.waitKey(0)

        print(f'Done. ({time.time() - t0:.3f}s)')
        # return pred_boxes

    # Plotting functions
    def plot_one_box(self, x, img, color=None, label=None, line_thickness=None):
        # Plots one bounding box on image img
        tl = line_thickness or round(0.001 * max(img.shape[0:2])) + 1  # line thickness
        color = color or [random.randint(0, 255) for _ in range(3)]
        c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3]))
        cv2.rectangle(img, c1, c2, color, thickness=tl)
        if label:
            tf = max(tl - 1, 1)  # font thickness
            t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0]
            c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3
            cv2.rectangle(img, c1, c2, color, -1)  # filled
            cv2.putText(img, label, (c1[0], c1[1] - 2), 0, tl / 3, [0, 0, 0], thickness=tf, lineType=cv2.LINE_AA)


if __name__ == "__main__":
    with open('yolov5_config.json', 'r', encoding='utf8') as fp:
        opt = json.load(fp)
    print('[INFO] YOLOv5 Config:', opt)
    darknet = Darknet(opt)
    if darknet.webcam:
        # cudnn.benchmark = True  # set True to speed up constant image size inference
        dataset = LoadStreams(darknet.source, img_size=opt["imgsz"], stride=darknet.stride)
    else:
        dataset = LoadImages(darknet.source, img_size=opt["imgsz"], stride=darknet.stride)
    darknet.detect(dataset)
    cv2.destroyAllWindows()

In addition, we need to provide a model configuration file, saved as JSON. Create a file called yolov5_config.json with the content below (note that standard JSON does not allow comments, so the fields are explained after the block):

{
    "source": "streams.txt",
    "weights": "runs/train/exp/weights/best.pt",
    "device": "cpu",
    "imgsz": 640,
    "stride": 32,
    "conf_thres": 0.35,
    "iou_thres": 0.45,
    "augment": false
}

Here source is the path of the video/image source, weights is the path of your own model weights, device selects the compute device ("cpu", or a GPU id such as "0"), imgsz is the input image size, stride is the model stride, conf_thres is the confidence threshold, iou_thres is the NMS IoU threshold, and augment toggles augmented inference.

The source can be a single image, e.g. "…/images/demo.jpg"; a video file, e.g. "…/videos/demo.mp4"; a video stream address, e.g. "rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov"; or a txt file containing multiple stream addresses, one per line, e.g.:

rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov
rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov
– With this configuration in place, running yolov5.py performs object detection inference on video files (mp4, avi, etc.), video stream addresses (http, rtsp, rtmp, etc.), and images (jpg, png).
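For example, a hypothetical run from the YOLOv5 repository root (assuming camera.py, yolov5.py, and yolov5_config.json sit alongside the models/ and utils/ packages so they are importable):

python yolov5.py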

3. Flask back-end processing

With the model wrapped, we can use the Flask framework to push the algorithm-processed images to the front end in real time. Create a new file called web_main.py:

# import the necessary packages
from yolov5 import Darknet
from camera import LoadStreams, LoadImages
from utils.general import non_max_suppression, scale_coords, check_imshow
from flask import Response
from flask import Flask
from flask import render_template
import time
import torch
import json
import cv2
import os

# initialize a flask object
app = Flask(__name__)

# initialize the video stream and allow the camera sensor to warmup
with open('yolov5_config.json', 'r', encoding='utf8') as fp:
    opt = json.load(fp)
print('[INFO] YOLOv5 Config:', opt)

darknet = Darknet(opt)
if darknet.webcam:
    # cudnn.benchmark = True  # set True to speed up constant image size inference
    dataset = LoadStreams(darknet.source, img_size=opt["imgsz"], stride=darknet.stride)
else:
    dataset = LoadImages(darknet.source, img_size=opt["imgsz"], stride=darknet.stride)
time.sleep(2.0)

@app.route("/")
def index():
    # return the rendered template
    return render_template("index.html")

def detect_gen(dataset, feed_type):
    view_img = check_imshow()
    t0 = time.time()
    for path, img, img0s, vid_cap in dataset:
        img = darknet.preprocess(img)

        t1 = time.time()
        pred = darknet.model(img, augment=darknet.opt["augment"])[0]  # 0.22s
        pred = pred.float()
        pred = non_max_suppression(pred, darknet.opt["conf_thres"], darknet.opt["iou_thres"])
        t2 = time.time()

        pred_boxes = []
        for i, det in enumerate(pred):
            if darknet.webcam:  # batch_size >= 1
                feed_type_curr, p, s, im0, frame = "Camera_%s" % str(i), path[i], '%g: ' % i, img0s[i].copy(), dataset.count
            else:
                feed_type_curr, p, s, im0, frame = "Camera", path, '', img0s, getattr(dataset, 'frame', 0)

            s += '%gx%g ' % img.shape[2:]  # print string
            gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
            if det is not None and len(det):
                det[:, :4] = scale_coords(
                    img.shape[2:], det[:, :4], im0.shape).round()

                # Print results
                for c in det[:, -1].unique():
                    n = (det[:, -1] == c).sum()  # detections per class
                    s += f"{n} {darknet.names[int(c)]}{'s' * (n > 1)}, "  # add to string

                for *xyxy, conf, cls_id in det:
                    lbl = darknet.names[int(cls_id)]
                    xyxy = torch.tensor(xyxy).view(1, 4).view(-1).tolist()
                    score = round(conf.tolist(), 3)
                    label = "{}: {}".format(lbl, score)
                    x1, y1, x2, y2 = int(xyxy[0]), int(xyxy[1]), int(xyxy[2]), int(xyxy[3])
                    pred_boxes.append((x1, y1, x2, y2, lbl, score))
                    if view_img:
                        darknet.plot_one_box(xyxy, im0, color=(255, 0, 0), label=label)

            # Print time (inference + NMS)
            # print(pred_boxes)
            print(f'{s}Done. ({t2 - t1:.3f}s)')
            if feed_type_curr == feed_type:
                frame = cv2.imencode('.jpg', im0)[1].tobytes()
                yield (b'--frame\r\n'
                       b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

@app.route('/video_feed/<feed_type>')
def video_feed(feed_type):
    """Video streaming route. Put this in the src attribute of an img tag."""
    if feed_type == 'Camera_0':
        return Response(detect_gen(dataset=dataset, feed_type=feed_type),
                        mimetype='multipart/x-mixed-replace; boundary=frame')

    elif feed_type == 'Camera_1':
        return Response(detect_gen(dataset=dataset, feed_type=feed_type),
                        mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, threaded=True)

The detect_gen function runs inference on the frames from all the streams and, matched by feed_type, sends each camera's annotated images to the front end through the video_feed streaming route. Note that every request to video_feed starts its own detect_gen generator over the shared dataset, so frames are effectively divided among concurrent feeds; with many cameras or viewers, a single shared inference loop would be the more robust design.
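If you want to sanity-check a feed without the front end, a small client can read the multipart stream directly. The sketch below is my own addition, not part of the original demo: it assumes the server is running locally on port 5000, uses the requests library, and counts JPEG frames by their start/end markers (approximate, since those byte pairs can occasionally occur inside compressed data):

import requests

# read the raw MJPEG stream and stop after a few frames
resp = requests.get('http://localhost:5000/video_feed/Camera_0', stream=True)
buf, frames = b'', 0
for chunk in resp.iter_content(chunk_size=4096):
    buf += chunk
    start = buf.find(b'\xff\xd8')            # JPEG start-of-image marker
    end = buf.find(b'\xff\xd9', start + 2)   # JPEG end-of-image marker
    while start != -1 and end != -1:
        frames += 1
        buf = buf[end + 2:]                  # drop the completed frame
        start = buf.find(b'\xff\xd8')
        end = buf.find(b'\xff\xd9', start + 2)
    if frames >= 10:
        print('received 10 frames, the feed is alive')
        break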

4. Front-end display

Finally, we write a simple front end. First create a templates folder, then inside it create a file called index.html containing the following HTML:

<html>
<head>
<style>
* {
    box-sizing: border-box;
    text-align: center;
}

.img-container {
    float: left;
    width: 30%;
    padding: 5px;
}

.clearfix::after {
    content: "";
    clear: both;
    display: table;
}
.clearfix {
    margin-left: 500px;
}
</style>
</head>
<body>
<h1>Multi-camera with YOLOv5</h1>
<div class="clearfix">
    <div class="img-container" align="center">
        <p align="center">Live stream 1</p>
        <img src="{{ url_for('video_feed', feed_type='Camera_0') }}" class="center" style="border:1px solid black;width:100%" alt="Live Stream 1">
    </div>
    <div class="img-container" align="center">
        <p align="center">Live stream 2</p>
        <img src="{{ url_for('video_feed', feed_type='Camera_1') }}" class="center" style="border:1px solid black;width:100%" alt="Live Stream 2">
    </div>
</div>
</body>
</html>

With this, our YOLOv5 multi-camera real-time inference code is complete. Let's run it:

– In a terminal, from the repository root, run:

python web_main.py

The terminal should then print information like the following:

[INFO] YOLOv5 Config: {'source': 'streams.txt', 'weights': 'runs/train/exp/weights/best.pt', 'device': 'cpu', 'imgsz': 640, 'stride': 32, 'conf_thres': 0.35, 'iou_thres': 0.45, 'augment': False}
Fusing layers
1/2: rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov... success (240x160 at 24.00 FPS).
2/2: rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov... success (240x160 at 24.00 FPS).

 * Serving Flask app "web_main" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)

* Then open a browser and go to localhost:5000. If the terminal reports no errors, a page showing the two detection streams side by side will appear.

Summary

1. Since I had no extra rtmp/rtsp stream addresses of my own, I used a public stream address, so the detection effect cannot really be seen;

2. When deploying this way, inference runs on stream addresses only; multiple addresses can be used by saving them to streams.txt and loading that file through yolov5_config.json;

3. This demo is a simplified end-to-end model deployment scheme; more functionality can be added as the scenario requires.

That concludes this article on real-time multi-camera object detection with YOLOv5. For more on the topic, please search our earlier articles or continue browsing the related articles below. We hope you will keep supporting us!
