
Live555 Study Notes (5): A Look at live555ProxyServer.cpp

2019-08-26 14:12

live555ProxyServer.cpp, found in the live/proxyServer directory, demonstrates how to use live555 to build a proxy server that relays RTSP video (for example, the stream from an IP camera).
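For reference, a typical invocation looks like this (the camera URL here is hypothetical):

./live555ProxyServer rtsp://192.168.1.100:554/stream

The proxy then prints, for each proxied stream, a URL of the form rtsp://<proxy-host>/proxyStream that front-end clients such as VLC can open.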

Let's start by looking at the main function:

int main(int argc, char** argv) {
  // Increase the maximum size of video frames that we can 'proxy' without truncation.
  // (Such frames are unreasonably large; the back-end servers should really not be sending frames this large!)
  OutPacketBuffer::maxSize = 300000; // bytes

  // Begin by setting up our usage environment:
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  env = BasicUsageEnvironment::createNew(*scheduler);

  /*
   ... handling of the various command-line arguments, omitted here
  */

  // Create the RTSP server.  Try first with the default port number (554),
  // and then with the alternative port number (8554):
  RTSPServer* rtspServer;
  portNumBits rtspServerPortNum = 554;
  rtspServer = createRTSPServer(rtspServerPortNum);
  if (rtspServer == NULL) {
    rtspServerPortNum = 8554;
    rtspServer = createRTSPServer(rtspServerPortNum);
  }
  if (rtspServer == NULL) {
    *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
    exit(1);
  }

  // Create a proxy for each "rtsp://" URL specified on the command line:
  for (i = 1; i < argc; ++i) {
    char const* proxiedStreamURL = argv[i];
    char streamName[30];
    if (argc == 2) {
      sprintf(streamName, "%s", "proxyStream"); // there's just one stream; give it this name
    } else {
      sprintf(streamName, "proxyStream-%d", i); // there's more than one stream; distinguish them by name
    }
    ServerMediaSession* sms
      = ProxyServerMediaSession::createNew(*env, rtspServer,
                                           proxiedStreamURL, streamName,
                                           username, password, tunnelOverHTTPPortNum, verbosityLevel);
    rtspServer->addServerMediaSession(sms);
    // proxiedStreamURL is the source rtsp URL being proxied; streamName is the name of the resulting ServerMediaSession
    char* proxyStreamURL = rtspServer->rtspURL(sms);
    *env << "RTSP stream, proxying the stream \"" << proxiedStreamURL << "\"\n";
    *env << "\tPlay this stream using the URL: " << proxyStreamURL << "\n";
    delete[] proxyStreamURL;
  }

  if (proxyREGISTERRequests) {
    *env << "(We handle incoming \"REGISTER\" requests on port " << rtspServerPortNum << ")\n";
  }

  // Also, attempt to create a HTTP server for RTSP-over-HTTP tunneling.
  // Try first with the default HTTP port (80), and then with the alternative HTTP
  // port numbers (8000 and 8080).
  if (rtspServer->setUpTunnelingOverHTTP(80) || rtspServer->setUpTunnelingOverHTTP(8000) || rtspServer->setUpTunnelingOverHTTP(8080)) {
    *env << "\n(We use port " << rtspServer->httpServerPortNum() << " for optional RTSP-over-HTTP tunneling.)\n";
  } else {
    *env << "\n(RTSP-over-HTTP tunneling is not available.)\n";
  }

  // Now, enter the event loop:
  env->taskScheduler().doEventLoop(); // does not return

  return 0; // only to prevent compiler warning
}

The main function is quite simple. The first statement sets the value of OutPacketBuffer::maxSize; in my tests, setting it to 300000 bytes was enough to carry 1080p video.

Then, as usual, it creates the TaskScheduler and UsageEnvironment objects; the handling of the various command-line arguments that follows is omitted from this analysis.

It then creates the RTSPServer, creates a ProxyServerMediaSession from each rtsp URL given on the command line and adds it to the RTSPServer, and finally enters the program's endless event loop.

Let's look at the ProxyServerMediaSession class:

class ProxyServerMediaSession: public ServerMediaSession {
public:
  static ProxyServerMediaSession* createNew(UsageEnvironment& env,
                                            RTSPServer* ourRTSPServer, // Note: We can be used by just one "RTSPServer"
                                            char const* inputStreamURL, // the "rtsp://" URL of the stream we'll be proxying
                                            char const* streamName = NULL,
                                            char const* username = NULL, char const* password = NULL,
                                            portNumBits tunnelOverHTTPPortNum = 0,
                                                // for streaming the *proxied* (i.e., back-end) stream
                                            int verbosityLevel = 0,
                                            int socketNumToServer = -1);
      // Hack: "tunnelOverHTTPPortNum" == 0xFFFF (i.e., all-ones) means: Stream RTP/RTCP-over-TCP, but *not* using HTTP
      // "verbosityLevel" == 1 means display basic proxy setup info; "verbosityLevel" == 2 means display RTSP client protocol also.
      // If "socketNumToServer" >= 0, then it is the socket number of an already-existing TCP connection to the server.
      //     (In this case, "inputStreamURL" must point to the socket's endpoint, so that it can be accessed via the socket.)

  virtual ~ProxyServerMediaSession();

  char const* url() const;

  char describeCompletedFlag;
    // initialized to 0; set to 1 when the back-end "DESCRIBE" completes.
    // (This can be used as a 'watch variable' in "doEventLoop()".)
  Boolean describeCompletedSuccessfully() const { return fClientMediaSession != NULL; }
    // This can be used - along with "describeCompletedFlag" - to check whether the back-end "DESCRIBE" completed *successfully*.

protected:
  ProxyServerMediaSession(UsageEnvironment& env, RTSPServer* ourRTSPServer,
                          char const* inputStreamURL, char const* streamName,
                          char const* username, char const* password,
                          portNumBits tunnelOverHTTPPortNum, int verbosityLevel,
                          int socketNumToServer,
                          createNewProxyRTSPClientFunc* ourCreateNewProxyRTSPClientFunc
                          = defaultCreateNewProxyRTSPClientFunc);

  // If you subclass "ProxyRTSPClient", then you will also need to define your own function
  // - with signature "createNewProxyRTSPClientFunc" (see above) - that creates a new object
  // of this subclass.  You should also subclass "ProxyServerMediaSession" and, in your
  // subclass's constructor, initialize the parent class (i.e., "ProxyServerMediaSession")
  // constructor by passing your new function as the "ourCreateNewProxyRTSPClientFunc"
  // parameter.

protected:
  RTSPServer* fOurRTSPServer;        // the RTSPServer object that this ProxyServerMediaSession was added to
  ProxyRTSPClient* fProxyRTSPClient; // a ProxyRTSPClient object used to talk to the given back-end rtsp server
  MediaSession* fClientMediaSession; // a MediaSession object used to request the media resource named by the given rtsp URL

private:
  friend class ProxyRTSPClient;
  friend class ProxyServerMediaSubsession;
  void continueAfterDESCRIBE(char const* sdpDescription);
  void resetDESCRIBEState(); // undoes what was done by "continueAfterDESCRIBE()"

private:
  int fVerbosityLevel;
  class PresentationTimeSessionNormalizer* fPresentationTimeSessionNormalizer;
  createNewProxyRTSPClientFunc* fCreateNewProxyRTSPClientFunc;
};

ProxyServerMediaSession is a subclass of ServerMediaSession. Compared with an ordinary ServerMediaSession it has three important extra member variables: RTSPServer* fOurRTSPServer, ProxyRTSPClient* fProxyRTSPClient, and MediaSession* fClientMediaSession. fOurRTSPServer holds the RTSPServer that the ProxyServerMediaSession was added to; fProxyRTSPClient holds the ProxyRTSPClient object belonging to this ProxyServerMediaSession; and fClientMediaSession holds its MediaSession object. Each ProxyServerMediaSession owns one ProxyRTSPClient and one MediaSession. This shows that the live555 proxy acts as an RTSP server and an RTSP client at the same time: as an RTSP client it fetches the media from the given rtsp URL (for example an IP camera's rtsp address), and as an RTSP server it relays that media to other RTSP clients (such as VLC).
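The overall data path is therefore roughly:

IP camera (back-end RTSP server)
        |  (proxy acts as RTSP client: fProxyRTSPClient + fClientMediaSession)
        v
live555 proxy (ProxyServerMediaSession inside an RTSPServer)
        |  (proxy acts as RTSP server)
        v
VLC and other front-end RTSP clients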

ProxyRTSPClient is a subclass of RTSPClient; let's look at its definition:

// A subclass of "RTSPClient", used to refer to the particular "ProxyServerMediaSession" object being used.
// It is used only within the implementation of "ProxyServerMediaSession", but is defined here, in case developers wish to
// subclass it.

class ProxyRTSPClient: public RTSPClient {
public:
  ProxyRTSPClient(class ProxyServerMediaSession& ourServerMediaSession, char const* rtspURL,
                  char const* username, char const* password,
                  portNumBits tunnelOverHTTPPortNum, int verbosityLevel, int socketNumToServer);
  virtual ~ProxyRTSPClient();

  void continueAfterDESCRIBE(char const* sdpDescription); // called from the static ::continueAfterDESCRIBE response handler
  void continueAfterLivenessCommand(int resultCode, Boolean serverSupportsGetParameter); // callback after a 'liveness' (heartbeat) command
  void continueAfterSETUP();                              // called from the static ::continueAfterSETUP response handler

private:
  void reset();

  Authenticator* auth() { return fOurAuthenticator; }

  void scheduleLivenessCommand();                    // schedules the task that sends the next 'liveness' command
  static void sendLivenessCommand(void* clientData); // sends the 'liveness' command

  void scheduleDESCRIBECommand();             // schedules the task that re-sends the "DESCRIBE" command
  static void sendDESCRIBE(void* clientData); // sends the "DESCRIBE" command

  static void subsessionTimeout(void* clientData);
  void handleSubsessionTimeout();

private:
  friend class ProxyServerMediaSession;
  friend class ProxyServerMediaSubsession;
  ProxyServerMediaSession& fOurServerMediaSession;
  char* fOurURL;
  Authenticator* fOurAuthenticator;
  Boolean fStreamRTPOverTCP;
  class ProxyServerMediaSubsession *fSetupQueueHead, *fSetupQueueTail;
  unsigned fNumSetupsDone;
  unsigned fNextDESCRIBEDelay; // in seconds
  Boolean fServerSupportsGetParameter, fLastCommandWasPLAY;
  TaskToken fLivenessCommandTask, fDESCRIBECommandTask, fSubsessionTimerTask;
};

Next, let's look at how a ProxyServerMediaSession object is created:

ProxyServerMediaSession* ProxyServerMediaSession
::createNew(UsageEnvironment& env, RTSPServer* ourRTSPServer,
            char const* inputStreamURL, char const* streamName,
            char const* username, char const* password,
            portNumBits tunnelOverHTTPPortNum, int verbosityLevel, int socketNumToServer) {
  return new ProxyServerMediaSession(env, ourRTSPServer, inputStreamURL, streamName, username, password,
                                     tunnelOverHTTPPortNum, verbosityLevel, socketNumToServer);
}

ProxyServerMediaSession
::ProxyServerMediaSession(UsageEnvironment& env, RTSPServer* ourRTSPServer,
                          char const* inputStreamURL, char const* streamName,
                          char const* username, char const* password,
                          portNumBits tunnelOverHTTPPortNum, int verbosityLevel,
                          int socketNumToServer,
                          createNewProxyRTSPClientFunc* ourCreateNewProxyRTSPClientFunc)
  : ServerMediaSession(env, streamName, NULL, NULL, False, NULL),
    describeCompletedFlag(0), fOurRTSPServer(ourRTSPServer), fClientMediaSession(NULL),
    fVerbosityLevel(verbosityLevel),
    fPresentationTimeSessionNormalizer(new PresentationTimeSessionNormalizer(envir())),
    fCreateNewProxyRTSPClientFunc(ourCreateNewProxyRTSPClientFunc) {
  // Open a RTSP connection to the input stream, and send a "DESCRIBE" command.
  // We'll use the SDP description in the response to set ourselves up.
  fProxyRTSPClient
    = (*fCreateNewProxyRTSPClientFunc)(*this, inputStreamURL, username, password,
                                       tunnelOverHTTPPortNum,
                                       verbosityLevel > 0 ? verbosityLevel-1 : verbosityLevel,
                                       socketNumToServer);
  ProxyRTSPClient::sendDESCRIBE(fProxyRTSPClient);
}

Inside the ProxyServerMediaSession constructor, the ProxyRTSPClient object is created through the fCreateNewProxyRTSPClientFunc function pointer, which defaults to defaultCreateNewProxyRTSPClientFunc:

ProxyRTSPClient*
defaultCreateNewProxyRTSPClientFunc(ProxyServerMediaSession& ourServerMediaSession,
                                    char const* rtspURL,
                                    char const* username, char const* password,
                                    portNumBits tunnelOverHTTPPortNum, int verbosityLevel,
                                    int socketNumToServer) {
  return new ProxyRTSPClient(ourServerMediaSession, rtspURL, username, password,
                             tunnelOverHTTPPortNum, verbosityLevel, socketNumToServer);
}
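This factory indirection is the extension hook described in the header comments earlier: to customize the client-side behavior you subclass ProxyRTSPClient, define a creation function with the createNewProxyRTSPClientFunc signature, and pass it through from a ProxyServerMediaSession subclass. A minimal sketch of that pattern (the My* names are hypothetical):

// Hypothetical subclass, following the subclassing instructions in the header comments:
class MyProxyRTSPClient: public ProxyRTSPClient {
public:
  MyProxyRTSPClient(ProxyServerMediaSession& sms, char const* rtspURL,
                    char const* username, char const* password,
                    portNumBits tunnelOverHTTPPortNum, int verbosityLevel, int socketNumToServer)
    : ProxyRTSPClient(sms, rtspURL, username, password,
                      tunnelOverHTTPPortNum, verbosityLevel, socketNumToServer) {}
  // ... override or extend behavior here ...
};

// A creation function matching the "createNewProxyRTSPClientFunc" signature:
ProxyRTSPClient* createMyProxyRTSPClient(ProxyServerMediaSession& sms, char const* rtspURL,
                                         char const* username, char const* password,
                                         portNumBits tunnelOverHTTPPortNum, int verbosityLevel,
                                         int socketNumToServer) {
  return new MyProxyRTSPClient(sms, rtspURL, username, password,
                               tunnelOverHTTPPortNum, verbosityLevel, socketNumToServer);
}

// A ProxyServerMediaSession subclass that plugs the factory in through the protected constructor:
class MyProxyServerMediaSession: public ProxyServerMediaSession {
public:
  MyProxyServerMediaSession(UsageEnvironment& env, RTSPServer* ourRTSPServer,
                            char const* inputStreamURL, char const* streamName)
    : ProxyServerMediaSession(env, ourRTSPServer, inputStreamURL, streamName,
                              NULL/*username*/, NULL/*password*/, 0/*tunnelOverHTTPPortNum*/,
                              0/*verbosityLevel*/, -1/*socketNumToServer*/,
                              createMyProxyRTSPClient) {}
};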

The newly created ProxyRTSPClient is then used to send a DESCRIBE command, requesting the SDP description of the media:

void ProxyRTSPClient::sendDESCRIBE(void* clientData) {
  ProxyRTSPClient* rtspClient = (ProxyRTSPClient*)clientData;
  if (rtspClient != NULL) rtspClient->sendDescribeCommand(::continueAfterDESCRIBE, rtspClient->auth());
}

void ProxyRTSPClient::continueAfterDESCRIBE(char const* sdpDescription) {
  if (sdpDescription != NULL) {
    fOurServerMediaSession.continueAfterDESCRIBE(sdpDescription);

    // Unlike most RTSP streams, there might be a long delay between this "DESCRIBE" command (to the downstream server) and the
    // subsequent "SETUP"/"PLAY" - which doesn't occur until the first time that a client requests the stream.
    // To prevent the proxied connection (between us and the downstream server) from timing out, we send periodic 'liveness'
    // ("OPTIONS" or "GET_PARAMETER") commands.  (The usual RTCP liveness mechanism wouldn't work here, because RTCP packets
    // don't get sent until after the "PLAY" command.)
    scheduleLivenessCommand();
  } else {
    // The "DESCRIBE" command failed, most likely because the server or the stream is not yet running.
    // Reschedule another "DESCRIBE" command to take place later:
    scheduleDESCRIBECommand();
  }
}

void ProxyRTSPClient::scheduleLivenessCommand() {
  // Delay a random time before sending another 'liveness' command.
  unsigned delayMax = sessionTimeoutParameter(); // if the server specified a maximum time between 'liveness' probes, then use that
  if (delayMax == 0) {
    delayMax = 60;
  }

  // Choose a random time from [delayMax/2,delayMax-1) seconds:
  unsigned const us_1stPart = delayMax*500000;
  unsigned uSecondsToDelay;
  if (us_1stPart <= 1000000) {
    uSecondsToDelay = us_1stPart;
  } else {
    unsigned const us_2ndPart = us_1stPart-1000000;
    uSecondsToDelay = us_1stPart + (us_2ndPart*our_random())%us_2ndPart;
  }
  fLivenessCommandTask = envir().taskScheduler().scheduleDelayedTask(uSecondsToDelay, sendLivenessCommand, this);
}

void ProxyRTSPClient::sendLivenessCommand(void* clientData) {
  ProxyRTSPClient* rtspClient = (ProxyRTSPClient*)clientData;

  // Note.  By default, we do not send "GET_PARAMETER" as our 'liveness notification' command, even if the server previously
  // indicated (in its response to our earlier "OPTIONS" command) that it supported "GET_PARAMETER".  This is because
  // "GET_PARAMETER" crashes some camera servers (even though they claimed to support "GET_PARAMETER").
#ifdef SEND_GET_PARAMETER_IF_SUPPORTED
  MediaSession* sess = rtspClient->fOurServerMediaSession.fClientMediaSession;

  if (rtspClient->fServerSupportsGetParameter && rtspClient->fNumSetupsDone > 0 && sess != NULL) {
    rtspClient->sendGetParameterCommand(*sess, ::continueAfterGET_PARAMETER, "", rtspClient->auth());
  } else {
#endif
    rtspClient->sendOptionsCommand(::continueAfterOPTIONS, rtspClient->auth());
#ifdef SEND_GET_PARAMETER_IF_SUPPORTED
  }
#endif
}

void ProxyRTSPClient::scheduleDESCRIBECommand() {
  // Delay 1s, 2s, 4s, 8s ... 256s until sending the next "DESCRIBE".  Then, keep delaying a random time from [256..511] seconds:
  unsigned secondsToDelay;
  if (fNextDESCRIBEDelay <= 256) {
    secondsToDelay = fNextDESCRIBEDelay;
    fNextDESCRIBEDelay *= 2;
  } else {
    secondsToDelay = 256 + (our_random()&0xFF); // [256..511] seconds
  }

  if (fVerbosityLevel > 0) {
    envir() << *this << ": RTSP \"DESCRIBE\" command failed; trying again in " << secondsToDelay << " seconds\n";
  }
  fDESCRIBECommandTask = envir().taskScheduler().scheduleDelayedTask(secondsToDelay*MILLION, sendDESCRIBE, this);
}

After the DESCRIBE command is sent, the static ::continueAfterDESCRIBE response handler is invoked, which in turn calls ProxyRTSPClient::continueAfterDESCRIBE. That function checks whether the SDP description was obtained successfully: if it was, it calls ProxyServerMediaSession::continueAfterDESCRIBE and then scheduleLivenessCommand() to schedule the sending of heartbeat commands; if it wasn't, it calls scheduleDESCRIBECommand() to schedule another DESCRIBE attempt.

ProxyRTSPClient uses a GET_PARAMETER or OPTIONS command as its heartbeat. In scheduleLivenessCommand, the delay before the next heartbeat is drawn at random from [delayMax/2, delayMax-1) seconds. In scheduleDESCRIBECommand, the next DESCRIBE delay is derived from the previous one: while the previous delay was at most 256 s, the delays follow the geometric sequence 1, 2, 4, 8, ..., 256 seconds; after that, each delay is drawn at random from [256, 511] seconds.
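As a worked example: with the default 60-second timeout, us_1stPart = 60 * 500000 µs = 30 s and us_2ndPart = 29 s, so each heartbeat fires between 30 s and 59 s after the previous one. A back-end that stays offline is re-probed with DESCRIBE after 1 s, 2 s, 4 s, ..., 256 s, and from then on every 256 to 511 s.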

Once the SDP description has been obtained successfully, ProxyServerMediaSession::continueAfterDESCRIBE is called:

void ProxyServerMediaSession::continueAfterDESCRIBE(char const* sdpDescription) {
  describeCompletedFlag = 1;

  // Create a (client) "MediaSession" object from the stream's SDP description ("resultString"), then iterate through its
  // "MediaSubsession" objects, to set up corresponding "ServerMediaSubsession" objects that we'll use to serve the stream's tracks.
  do {
    fClientMediaSession = MediaSession::createNew(envir(), sdpDescription);
    if (fClientMediaSession == NULL) break;

    MediaSubsessionIterator iter(*fClientMediaSession);
    for (MediaSubsession* mss = iter.next(); mss != NULL; mss = iter.next()) {
      ServerMediaSubsession* smss = new ProxyServerMediaSubsession(*mss);
      addSubsession(smss);
      if (fVerbosityLevel > 0) {
        envir() << *this << " added new \"ProxyServerMediaSubsession\" for "
                << mss->protocolName() << "/" << mss->mediumName() << "/" << mss->codecName() << " track\n";
      }
    }
  } while (0);
}

continueAfterDESCRIBE first creates the MediaSession object, then creates a ProxyServerMediaSubsession for each of the MediaSession's subsessions and adds it to the ProxyServerMediaSession. ProxyServerMediaSubsession inherits from OnDemandServerMediaSubsession:

class ProxyServerMediaSubsession: public OnDemandServerMediaSubsession {
public:
  ProxyServerMediaSubsession(MediaSubsession& mediaSubsession);
  virtual ~ProxyServerMediaSubsession();

  char const* codecName() const { return fClientMediaSubsession.codecName(); }

private: // redefined virtual functions
  virtual FramedSource* createNewStreamSource(unsigned clientSessionId,
                                              unsigned& estBitrate);
  virtual void closeStreamSource(FramedSource *inputSource);
  virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock,
                                    unsigned char rtpPayloadTypeIfDynamic,
                                    FramedSource* inputSource);

private:
  static void subsessionByeHandler(void* clientData);
  void subsessionByeHandler();

  int verbosityLevel() const { return ((ProxyServerMediaSession*)fParentSession)->fVerbosityLevel; }

private:
  friend class ProxyRTSPClient;
  MediaSubsession& fClientMediaSubsession; // the 'client' media subsession object that corresponds to this 'server' media subsession
  ProxyServerMediaSubsession* fNext; // used when we're part of a queue
  Boolean fHaveSetupStream;
};

ProxyServerMediaSubsession holds a reference to a MediaSubsession; each ProxyServerMediaSubsession object corresponds to one MediaSubsession object. A ProxyServerMediaSubsession does not rush to send SETUP: it waits until some RTSP client (such as VLC) requests it, and only then sends SETUP to establish the connection to the IP camera.

After that, the RTSPServer waits for requests from RTSP clients. Suppose an rtsp request now arrives from a VLC client; the flow is then similar to the one described in the earlier post "The process of establishing an RTSP connection (RTSP server side)". Below is a brief walk-through that highlights the steps that differ, starting from RTSPServer::handleCmd_DESCRIBE:

handleCmd_DESCRIBE handles the DESCRIBE command from the client and calls ServerMediaSession::generateSDPDescription;

generateSDPDescription in turn calls OnDemandServerMediaSubsession::sdpLines;

sdpLines calls ProxyServerMediaSubsession::createNewStreamSource to create a temporary FramedSource object and ProxyServerMediaSubsession::createNewRTPSink to create a temporary RTPSink object, and then calls OnDemandServerMediaSubsession::setSDPLinesFromRTPSink.

FramedSource* ProxyServerMediaSubsession::createNewStreamSource(unsigned clientSessionId, unsigned& estBitrate) {
  ProxyServerMediaSession* const sms = (ProxyServerMediaSession*)fParentSession;

  if (verbosityLevel() > 0) {
    envir() << *this << "::createNewStreamSource(session id " << clientSessionId << ")\n";
  }

  // If we haven't yet created a data source from our 'media subsession' object, initiate() it to do so:
  if (fClientMediaSubsession.readSource() == NULL) {
    fClientMediaSubsession.receiveRawMP3ADUs(); // hack for MPA-ROBUST streams
    fClientMediaSubsession.receiveRawJPEGFrames(); // hack for proxying JPEG/RTP streams. (Don't do this if we're transcoding.)
    fClientMediaSubsession.initiate(); // call MediaSubsession::initiate() to initialize the MediaSubsession object
    if (verbosityLevel() > 0) {
      envir() << "\tInitiated: " << *this << "\n";
    }
    // Insert a PresentationTimeSessionNormalizer in front of fReadSource as a filter:
    if (fClientMediaSubsession.readSource() != NULL) {
      // Add to the front of all data sources a filter that will 'normalize' their frames' presentation times,
      // before the frames get re-transmitted by our server:
      char const* const codecName = fClientMediaSubsession.codecName();
      FramedFilter* normalizerFilter = sms->fPresentationTimeSessionNormalizer
        ->createNewPresentationTimeSubsessionNormalizer(fClientMediaSubsession.readSource(), fClientMediaSubsession.rtpSource(), codecName);

      fClientMediaSubsession.addFilter(normalizerFilter); // the ProxyServerMediaSubsession's FramedSource uses the MediaSubsession's FramedSource as its media source

      // Some data sources require a 'framer' object to be added, before they can be fed into
      // a "RTPSink".  Adjust for this now:
      if (strcmp(codecName, "H264") == 0) { // additionally insert an H264VideoStreamDiscreteFramer in front of fReadSource as a filter
        fClientMediaSubsession.addFilter(H264VideoStreamDiscreteFramer
                                         ::createNew(envir(), fClientMediaSubsession.readSource()));
      } else if (strcmp(codecName, "H265") == 0) {
        fClientMediaSubsession.addFilter(H265VideoStreamDiscreteFramer
                                         ::createNew(envir(), fClientMediaSubsession.readSource()));
      } else if (strcmp(codecName, "MP4V-ES") == 0) {
        fClientMediaSubsession.addFilter(MPEG4VideoStreamDiscreteFramer
                                         ::createNew(envir(), fClientMediaSubsession.readSource(),
                                                     True/* leave PTs unmodified*/));
      } else if (strcmp(codecName, "MPV") == 0) {
        fClientMediaSubsession.addFilter(MPEG1or2VideoStreamDiscreteFramer
                                         ::createNew(envir(), fClientMediaSubsession.readSource(),
                                                     False, 5.0, True/* leave PTs unmodified*/));
      } else if (strcmp(codecName, "DV") == 0) {
        fClientMediaSubsession.addFilter(DVVideoStreamFramer
                                         ::createNew(envir(), fClientMediaSubsession.readSource(),
                                                     False, True/* leave PTs unmodified*/));
      }
    }

    if (fClientMediaSubsession.rtcpInstance() != NULL) {
      fClientMediaSubsession.rtcpInstance()->setByeHandler(subsessionByeHandler, this);
    }
  }

  ProxyRTSPClient* const proxyRTSPClient = sms->fProxyRTSPClient;
  if (clientSessionId != 0) { // when a temporary FramedSource is created just to form the SDP description, clientSessionId is 0, so no SETUP is sent
    // We're being called as a result of implementing a RTSP "SETUP".
    if (!fHaveSetupStream) {
      // This is our first "SETUP".  Send RTSP "SETUP" and later "PLAY" commands to the proxied server, to start streaming:
      // (Before sending "SETUP", enqueue ourselves on the "RTSPClient"s 'SETUP queue', so we'll be able to get the correct
      //  "ProxyServerMediaSubsession" to handle the response.  (Note that responses come back in the same order as requests.))
      Boolean queueWasEmpty = proxyRTSPClient->fSetupQueueHead == NULL;
      if (queueWasEmpty) {
        proxyRTSPClient->fSetupQueueHead = this;
      } else {
        proxyRTSPClient->fSetupQueueTail->fNext = this;
      }
      proxyRTSPClient->fSetupQueueTail = this;

      // Hack: If there's already a pending "SETUP" request (for another track), don't send this track's "SETUP" right away, because
      // the server might not properly handle 'pipelined' requests.  Instead, wait until after previous "SETUP" responses come back.
      if (queueWasEmpty) { // send the SETUP command
        proxyRTSPClient->sendSetupCommand(fClientMediaSubsession, ::continueAfterSETUP,
                                          False, proxyRTSPClient->fStreamRTPOverTCP, False, proxyRTSPClient->auth());
        ++proxyRTSPClient->fNumSetupsDone;
        fHaveSetupStream = True;
      }
    } else {
      // This is a "SETUP" from a new client.  We know that there are no other currently active clients (otherwise we wouldn't
      // have been called here), so we know that the substream was previously "PAUSE"d.  Send "PLAY" downstream once again,
      // to resume the stream:
      if (!proxyRTSPClient->fLastCommandWasPLAY) { // so that we send only one "PLAY"; not one for each subsession
        proxyRTSPClient->sendPlayCommand(fClientMediaSubsession.parentSession(), NULL, -1.0f/*resume from previous point*/,
                                         -1.0f, 1.0f, proxyRTSPClient->auth());
        proxyRTSPClient->fLastCommandWasPLAY = True;
      }
    }
  }

  estBitrate = fClientMediaSubsession.bandwidth();
  if (estBitrate == 0) estBitrate = 50; // kbps, estimate
  return fClientMediaSubsession.readSource();
}

RTPSink* ProxyServerMediaSubsession
::createNewRTPSink(Groupsock* rtpGroupsock, unsigned char rtpPayloadTypeIfDynamic, FramedSource* inputSource) {
  if (verbosityLevel() > 0) {
    envir() << *this << "::createNewRTPSink()\n";
  }

  // Create (and return) the appropriate "RTPSink" object for our codec:
  RTPSink* newSink;
  char const* const codecName = fClientMediaSubsession.codecName();
  if (strcmp(codecName, "AC3") == 0 || strcmp(codecName, "EAC3") == 0) {
    newSink = AC3AudioRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic,
                                         fClientMediaSubsession.rtpTimestampFrequency());
#if 0 // This code does not work; do *not* enable it:
  } else if (strcmp(codecName, "AMR") == 0 || strcmp(codecName, "AMR-WB") == 0) {
    Boolean isWideband = strcmp(codecName, "AMR-WB") == 0;
    newSink = AMRAudioRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic,
                                         isWideband, fClientMediaSubsession.numChannels());
#endif
  } else if (strcmp(codecName, "DV") == 0) {
    newSink = DVVideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
  } else if (strcmp(codecName, "GSM") == 0) {
    newSink = GSMAudioRTPSink::createNew(envir(), rtpGroupsock);
  } else if (strcmp(codecName, "H263-1998") == 0 || strcmp(codecName, "H263-2000") == 0) {
    newSink = H263plusVideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic,
                                              fClientMediaSubsession.rtpTimestampFrequency());
  } else if (strcmp(codecName, "H264") == 0) { // create an H264VideoRTPSink object
    newSink = H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic,
                                          fClientMediaSubsession.fmtp_spropparametersets());
  } else if (strcmp(codecName, "H265") == 0) {
    newSink = H265VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic,
                                          fClientMediaSubsession.fmtp_spropvps(),
                                          fClientMediaSubsession.fmtp_spropsps(),
                                          fClientMediaSubsession.fmtp_sproppps());
  } else if (strcmp(codecName, "JPEG") == 0) {
    newSink = SimpleRTPSink::createNew(envir(), rtpGroupsock, 26, 90000, "video", "JPEG",
                                       1/*numChannels*/, False/*allowMultipleFramesPerPacket*/, False/*doNormalMBitRule*/);
  } else if (strcmp(codecName, "MP4A-LATM") == 0) {
    newSink = MPEG4LATMAudioRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic,
                                               fClientMediaSubsession.rtpTimestampFrequency(),
                                               fClientMediaSubsession.fmtp_config(),
                                               fClientMediaSubsession.numChannels());
  } else if (strcmp(codecName, "MP4V-ES") == 0) {
    newSink = MPEG4ESVideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic,
                                             fClientMediaSubsession.rtpTimestampFrequency(),
                                             fClientMediaSubsession.attrVal_unsigned("profile-level-id"),
                                             fClientMediaSubsession.fmtp_config());
  } else if (strcmp(codecName, "MPA") == 0) {
    newSink = MPEG1or2AudioRTPSink::createNew(envir(), rtpGroupsock);
  } else if (strcmp(codecName, "MPA-ROBUST") == 0) {
    newSink = MP3ADURTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
  } else if (strcmp(codecName, "MPEG4-GENERIC") == 0) {
    newSink = MPEG4GenericRTPSink::createNew(envir(), rtpGroupsock,
                                             rtpPayloadTypeIfDynamic, fClientMediaSubsession.rtpTimestampFrequency(),
                                             fClientMediaSubsession.mediumName(),
                                             fClientMediaSubsession.attrVal_strToLower("mode"),
                                             fClientMediaSubsession.fmtp_config(), fClientMediaSubsession.numChannels());
  } else if (strcmp(codecName, "MPV") == 0) {
    newSink = MPEG1or2VideoRTPSink::createNew(envir(), rtpGroupsock);
  } else if (strcmp(codecName, "OPUS") == 0) {
    newSink = SimpleRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic,
                                       48000, "audio", "OPUS", 2, False/*only 1 Opus 'packet' in each RTP packet*/);
  } else if (strcmp(codecName, "T140") == 0) {
    newSink = T140TextRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
  } else if (strcmp(codecName, "THEORA") == 0) {
    newSink = TheoraVideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic,
                                            fClientMediaSubsession.fmtp_config());
  } else if (strcmp(codecName, "VORBIS") == 0) {
    newSink = VorbisAudioRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic,
                                            fClientMediaSubsession.rtpTimestampFrequency(), fClientMediaSubsession.numChannels(),
                                            fClientMediaSubsession.fmtp_config());
  } else if (strcmp(codecName, "VP8") == 0) {
    newSink = VP8VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
  } else if (strcmp(codecName, "AMR") == 0 || strcmp(codecName, "AMR-WB") == 0) {
    // Proxying of these codecs is currently *not* supported, because the data received by the "RTPSource" object is not in a
    // form that can be fed directly into a corresponding "RTPSink" object.
    if (verbosityLevel() > 0) {
      envir() << "\treturns NULL (because we currently don't support the proxying of \""
              << fClientMediaSubsession.mediumName() << "/" << codecName << "\" streams)\n";
    }
    return NULL;
  } else if (strcmp(codecName, "QCELP") == 0 ||
             strcmp(codecName, "H261") == 0 ||
             strcmp(codecName, "H263-1998") == 0 || strcmp(codecName, "H263-2000") == 0 ||
             strcmp(codecName, "X-QT") == 0 || strcmp(codecName, "X-QUICKTIME") == 0) {
    // This codec requires a specialized RTP payload format; however, we don't yet have an appropriate "RTPSink" subclass for it:
    if (verbosityLevel() > 0) {
      envir() << "\treturns NULL (because we don't have a \"RTPSink\" subclass for this RTP payload format)\n";
    }
    return NULL;
  } else {
    // This codec is assumed to have a simple RTP payload format that can be implemented just with a "SimpleRTPSink":
    Boolean allowMultipleFramesPerPacket = True; // by default
    Boolean doNormalMBitRule = True; // by default
    // Some codecs change the above default parameters:
    if (strcmp(codecName, "MP2T") == 0) {
      doNormalMBitRule = False; // no RTP 'M' bit
    }
    newSink = SimpleRTPSink::createNew(envir(), rtpGroupsock,
                                       rtpPayloadTypeIfDynamic, fClientMediaSubsession.rtpTimestampFrequency(),
                                       fClientMediaSubsession.mediumName(), fClientMediaSubsession.codecName(),
                                       fClientMediaSubsession.numChannels(), allowMultipleFramesPerPacket, doNormalMBitRule);
  }

  // Because our relayed frames' presentation times are inaccurate until the input frames have been RTCP-synchronized,
  // we temporarily disable RTCP "SR" reports for this "RTPSink" object:
  newSink->enableRTCPReports() = False;

  // Also tell our "PresentationTimeSubsessionNormalizer" object about the "RTPSink", so it can enable RTCP "SR" reports later:
  PresentationTimeSubsessionNormalizer* ssNormalizer;
  if (strcmp(codecName, "H264") == 0 ||
      strcmp(codecName, "H265") == 0 ||
      strcmp(codecName, "MP4V-ES") == 0 ||
      strcmp(codecName, "MPV") == 0 ||
      strcmp(codecName, "DV") == 0) {
    // There was a separate 'framer' object in front of the "PresentationTimeSubsessionNormalizer", so go back one object to get it:
    ssNormalizer = (PresentationTimeSubsessionNormalizer*)(((FramedFilter*)inputSource)->inputSource());
  } else {
    ssNormalizer = (PresentationTimeSubsessionNormalizer*)inputSource;
  }
  ssNormalizer->setRTPSink(newSink);

  return newSink;
}

In ProxyServerMediaSubsession::createNewStreamSource, MediaSubsession::initiate is called first for initialization, and then two filters are inserted: a PresentationTimeSessionNormalizer and an H264VideoStreamDiscreteFramer. I haven't examined PresentationTimeSessionNormalizer closely; roughly, its job appears to be normalizing the frames' presentation timestamps. The H264VideoStreamDiscreteFramer separates the received data into individual frames.

In ProxyServerMediaSubsession::createNewRTPSink, the main work (for an H.264 track) is creating an H264VideoRTPSink object.
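For an H.264 track, then, each frame ends up flowing through the following chain of objects:

RTPSource (fClientMediaSubsession.readSource(), receiving from the IP camera)
  -> PresentationTimeSubsessionNormalizer
  -> H264VideoStreamDiscreteFramer
  -> H264VideoRTPSink (sending to the front-end clients)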

After these two functions have run, OnDemandServerMediaSubsession::setSDPLinesFromRTPSink is called;

setSDPLinesFromRTPSink calls OnDemandServerMediaSubsession::getAuxSDPLine;

getAuxSDPLine calls H264VideoRTPSink::auxSDPLine:

char const* H264VideoRTPSink::auxSDPLine() {
  // Generate a new "a=fmtp:" line each time, using our SPS and PPS (if we have them),
  // otherwise parameters from our framer source (in case they've changed since the last time that
  // we were called):
  H264or5VideoStreamFramer* framerSource = NULL;
  u_int8_t* vpsDummy = NULL; unsigned vpsDummySize = 0;
  u_int8_t* sps = fSPS; unsigned spsSize = fSPSSize;
  u_int8_t* pps = fPPS; unsigned ppsSize = fPPSSize;
  if (sps == NULL || pps == NULL) {
    // We need to get SPS and PPS from our framer source:
    if (fOurFragmenter == NULL) return NULL; // we don't yet have a fragmenter (and therefore not a source)
    framerSource = (H264or5VideoStreamFramer*)(fOurFragmenter->inputSource());
    if (framerSource == NULL) return NULL; // we don't yet have a source
    // get the VPS, SPS and PPS:
    framerSource->getVPSandSPSandPPS(vpsDummy, vpsDummySize, sps, spsSize, pps, ppsSize);
    if (sps == NULL || pps == NULL) return NULL; // our source isn't ready
  }

  // Set up the "a=fmtp:" SDP line for this stream:
  u_int8_t* spsWEB = new u_int8_t[spsSize]; // "WEB" means "Without Emulation Bytes"
  unsigned spsWEBSize = removeH264or5EmulationBytes(spsWEB, spsSize, sps, spsSize);
  if (spsWEBSize < 4) { // Bad SPS size => assume our source isn't ready
    delete[] spsWEB;
    return NULL;
  }
  u_int32_t profileLevelId = (spsWEB[1]<<16) | (spsWEB[2]<<8) | spsWEB[3];
  delete[] spsWEB;

  char* sps_base64 = base64Encode((char*)sps, spsSize);
  char* pps_base64 = base64Encode((char*)pps, ppsSize);

  char const* fmtpFmt =
    "a=fmtp:%d packetization-mode=1"
    ";profile-level-id=%06X"
    ";sprop-parameter-sets=%s,%s\r\n";
  unsigned fmtpFmtSize = strlen(fmtpFmt)
    + 3 /* max char len */
    + 6 /* 3 bytes in hex */
    + strlen(sps_base64) + strlen(pps_base64);
  char* fmtp = new char[fmtpFmtSize];
  sprintf(fmtp, fmtpFmt,
          rtpPayloadType(),
          profileLevelId,
          sps_base64, pps_base64);

  delete[] sps_base64;
  delete[] pps_base64;

  delete[] fFmtpSDPLine; fFmtpSDPLine = fmtp;
  return fFmtpSDPLine;
}

In H264VideoRTPSink::auxSDPLine, getVPSandSPSandPPS is called to obtain the VPS, SPS and PPS, after which the assembled SDP description is sent to the RTSP client (the VLC client).
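For illustration only (the payload type and parameter values below are made up), the generated line has the shape:

a=fmtp:96 packetization-mode=1;profile-level-id=640028;sprop-parameter-sets=Z2QAKKwrQA==,aO48sA==

where the two base64 strings are the SPS and PPS, and profile-level-id comes from bytes 1-3 of the SPS (after the emulation-prevention bytes have been removed).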

The RTSPServer then waits for the RTSP client (VLC) to send a SETUP command; when one arrives, it is handled by RTSPServer::handleCmd_SETUP;

handleCmd_SETUP calls OnDemandServerMediaSubsession::getStreamParameters;

getStreamParameters again calls ProxyServerMediaSubsession::createNewStreamSource to create the FramedSource and ProxyServerMediaSubsession::createNewRTPSink to create the RTPSink. This time the clientSessionId argument passed to createNewStreamSource is non-zero, so createNewStreamSource sends a SETUP command to the IP camera to establish the connection. When the reply arrives, the static ::continueAfterSETUP handler is invoked, which in turn calls ProxyRTSPClient::continueAfterSETUP:

void ProxyRTSPClient::continueAfterSETUP() {
  if (fVerbosityLevel > 0) {
    envir() << *this << "::continueAfterSETUP(): head codec: " << fSetupQueueHead->fClientMediaSubsession.codecName()
            << "; numSubsessions " << fSetupQueueHead->fParentSession->numSubsessions() << "\n\tqueue:";
    for (ProxyServerMediaSubsession* p = fSetupQueueHead; p != NULL; p = p->fNext) {
      envir() << "\t" << p->fClientMediaSubsession.codecName();
    }
    envir() << "\n";
  }
  envir().taskScheduler().unscheduleDelayedTask(fSubsessionTimerTask); // in case it had been set

  // Dequeue the first "ProxyServerMediaSubsession" from our 'SETUP queue'.  It will be the one for which this "SETUP" was done:
  ProxyServerMediaSubsession* smss = fSetupQueueHead; // Assert: != NULL
  fSetupQueueHead = fSetupQueueHead->fNext;
  if (fSetupQueueHead == NULL) fSetupQueueTail = NULL;

  if (fSetupQueueHead != NULL) {
    // There are still entries in the queue, for tracks for which we have still to do a "SETUP".
    // "SETUP" the first of these now:
    sendSetupCommand(fSetupQueueHead->fClientMediaSubsession, ::continueAfterSETUP,
                     False, fStreamRTPOverTCP, False, fOurAuthenticator);
    ++fNumSetupsDone;
    fSetupQueueHead->fHaveSetupStream = True;
  } else {
    if (fNumSetupsDone >= smss->fParentSession->numSubsessions()) {
      // We've now finished setting up each of our subsessions (i.e., 'tracks').
      // Continue by sending a "PLAY" command (an 'aggregate' "PLAY" command, on the whole session):
      sendPlayCommand(smss->fClientMediaSubsession.parentSession(), NULL, -1.0f, -1.0f, 1.0f, fOurAuthenticator);
          // the "-1.0f" "start" parameter causes the "PLAY" to be sent without a "Range:" header, in case we'd already done
          // a "PLAY" before (as a result of a 'subsession timeout' (note below))
      fLastCommandWasPLAY = True;
    } else {
      // Some of this session's subsessions (i.e., 'tracks') remain to be "SETUP".  They might get "SETUP" very soon, but it's
      // also possible - if the remote client chose to play only some of the session's tracks - that they might not.
      // To allow for this possibility, we set a timer.  If the timer expires without the remaining subsessions getting "SETUP",
      // then we send a "PLAY" command anyway:
      fSubsessionTimerTask = envir().taskScheduler().scheduleDelayedTask(SUBSESSION_TIMEOUT_SECONDS*MILLION, (TaskFunc*)subsessionTimeout, this);
    }
  }
}

In ProxyRTSPClient::continueAfterSETUP, SETUP commands are sent for the remaining MediaSubsessions that have not yet been set up; once all of them have been set up, a PLAY command is sent to the IP camera, requesting that the media stream start.

The RTSPServer then waits for the PLAY command from the RTSP client (VLC); when it arrives, it is handled by RTSPServer::RTSPClientSession::handleCmd_PLAY;

which calls OnDemandServerMediaSubsession::startStream, which in turn calls StreamState::startPlaying;

from then on, the H264VideoRTPSink continuously pulls data from the H264VideoStreamDiscreteFramer and sends it to the RTSP client (VLC); the H264VideoStreamDiscreteFramer pulls from the MediaSubsession's FramedSource, and the MediaSubsession's FramedSource receives its data from the IP camera.

That is the whole process by which live555 acts as a proxy server relaying live RTSP video. It is essentially a combination of the flows covered in the previous two posts: toward the IP camera live555 acts as an RTSP client, and toward VLC it acts as an RTSP server.

Some suggested modifications to live555ProxyServer.cpp:

With live555ProxyServer.cpp it is easy to build a proxy server that relays live RTSP video, for example from an IP camera. In my experiments, however, the program still has some problems and needs a few modifications to run well as a proxy. My understanding is limited, so these changes may not fix the root causes; take them as reference only.

(1) At the top of main there is OutPacketBuffer::maxSize = 300000; the original statement was OutPacketBuffer::maxSize = 30000. When relaying high-definition live video, I found that VLC showed large areas of mosaic artifacts, while the live555 server printed "MultiFramedRTPSink::afterGettingFrame1(): The input frame data was too large for our buffer size ...".

This message is printed from MultiFramedRTPSink::afterGettingFrame1, and it plainly means that the RTPSink's buffer is too small for a frame of HD video. MultiFramedRTPSink keeps its data in fOutBuf, a pointer to an OutPacketBuffer instance; look at OutPacketBuffer::totalBytesAvailable:

unsigned totalBytesAvailable() const {
  return fLimit - (fPacketStart + fCurOffset);
}

The implementation is trivial: if totalBytesAvailable returns too small a value, then fLimit is too small. fLimit is set in the OutPacketBuffer constructor:

OutPacketBuffer
::OutPacketBuffer(unsigned preferredPacketSize, unsigned maxPacketSize, unsigned maxBufferSize)
  : fPreferred(preferredPacketSize), fMax(maxPacketSize),
    fOverflowDataSize(0) {
  if (maxBufferSize == 0) maxBufferSize = maxSize; // maxBufferSize defaults to 0
  unsigned maxNumPackets = (maxBufferSize + (maxPacketSize-1))/maxPacketSize;
  fLimit = maxNumPackets*maxPacketSize;
  fBuf = new unsigned char[fLimit];
  resetPacketStart();
  resetOffset();
  resetOverflowData();
}

So fLimit depends on maxNumPackets and maxPacketSize, and maxPacketSize is set in the MultiFramedRTPSink constructor:

MultiFramedRTPSink::MultiFramedRTPSink(UsageEnvironment& env,
                                       Groupsock* rtpGS,
                                       unsigned char rtpPayloadType,
                                       unsigned rtpTimestampFrequency,
                                       char const* rtpPayloadFormatName,
                                       unsigned numChannels)
  : RTPSink(env, rtpGS, rtpPayloadType, rtpTimestampFrequency,
            rtpPayloadFormatName, numChannels),
    fOutBuf(NULL), fCurFragmentationOffset(0), fPreviousFrameEndedFragmentation(False),
    fOnSendErrorFunc(NULL), fOnSendErrorData(NULL) {
  setPacketSizes(1000, 1448);
      // Default max packet size (1500, minus allowance for IP, UDP, UMTP headers)
      // (Also, make it a multiple of 4 bytes, just in case that matters.)
}

void MultiFramedRTPSink::setPacketSizes(unsigned preferredPacketSize,
                                        unsigned maxPacketSize) {
  if (preferredPacketSize > maxPacketSize || preferredPacketSize == 0) return;
      // sanity check

  delete fOutBuf;
  fOutBuf = new OutPacketBuffer(preferredPacketSize, maxPacketSize);
  fOurMaxPacketSize = maxPacketSize; // save value, in case subclasses need it
}

maxPacketSize therefore defaults to 1448, so a too-small fLimit means that maxBufferSize is too small. The OutPacketBuffer constructor declares maxBufferSize with a default value of 0, in which case it is assigned maxSize, a static member of OutPacketBuffer. So all that is needed is to set OutPacketBuffer::maxSize to a larger value; in my tests, 300000 was enough to relay 1080p HD video.
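Plugging the numbers into the constructor above: with OutPacketBuffer::maxSize = 300000 and maxPacketSize = 1448, maxNumPackets = (300000 + 1447) / 1448 = 208, so fLimit = 208 * 1448 = 301184 bytes.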

(2) When the number of VLC clients requesting a given stream drops to zero, live555 prints the error RTCPInstance error: Hit limit when reading incoming packet over TCP. Increase "maxRTCPPacketSize". This message comes from the very beginning of RTCPInstance::incomingReportHandler1. It asks us to increase maxRTCPPacketSize, but no matter how much I increased it the message kept appearing, and I could not find a real fix. Since ignoring some RTCP packets should not noticeably affect the relayed data, and the endless message is annoying, I used the following workaround:

void RTCPInstance::incomingReportHandler1() {
  do {
    if (fNumBytesAlreadyRead >= maxRTCPPacketSize) {
      memset(fInBuf, 0, fNumBytesAlreadyRead);
      fNumBytesAlreadyRead = 0;
      break;
    }

    /*
     ... the rest of the original function body, omitted here
    */
}

(3) The main function of live555ProxyServer.cpp has an input parameter streamRTPOverTCP, which defaults to False. First, for clients on an external network to be able to reach the streaming server, streamRTPOverTCP must be set to True; second, to relay a camera that is itself on an external network, streamRTPOverTCP must also be set to True.
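If I remember the stock option parsing correctly, this corresponds to passing the -t flag on the command line (e.g. ./live555ProxyServer -t rtsp://...), which sets streamRTPOverTCP to True so that the proxy also requests RTP/RTCP-over-TCP from the back-end server.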

(4) In ProxyServerMediaSession.cpp, in ProxyServerMediaSubsession::closeStreamSource, we need to comment out the if (fHaveSetupStream) statement, because relaying of live video does not support the PAUSE command. Without this change, once the number of VLC clients requesting a live stream drops to zero, a VLC client that later requests that stream again can no longer play it. A sketch of the change follows below.
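The sketch (the elided body differs between live555 versions; the point is simply to stop the proxy from sending "PAUSE" to the back-end when the last front-end client leaves):

void ProxyServerMediaSubsession::closeStreamSource(FramedSource* inputSource) {
  // if (fHaveSetupStream) {
  //   ... original code that "PAUSE"s the back-end stream ...
  // }
}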

(5) For a single stream, as more and more VLC clients request it, clients that join later may report that they are playing yet show no picture. In RTPInterface.cpp, find the RTPInterface constructor and comment out the line that calls makeSocketNonBlocking:

RTPInterface::RTPInterface(Medium* owner, Groupsock* gs)
  : fOwner(owner), fGS(gs),
    fTCPStreams(NULL),
    fNextTCPReadSize(0), fNextTCPReadStreamSocketNum(-1),
    fNextTCPReadStreamChannelId(0xFF), fReadHandlerProc(NULL),
    fAuxReadHandlerFunc(NULL), fAuxReadHandlerClientData(NULL) {
  // Make the socket non-blocking, even though it will be read from only asynchronously, when packets arrive.
  // The reason for this is that, in some OSs, reads on a blocking socket can (allegedly) sometimes block,
  // even if the socket was previously reported (e.g., by "select()") as having data available.
  // (This can supposedly happen if the UDP checksum fails, for example.)

  //makeSocketNonBlocking(fGS->socketNum()); // comment out this line
  increaseSendBufferTo(envir(), fGS->socketNum(), 50*1024);
}

makeSocketNonBlocking, as its name implies, makes a socket non-blocking; the call in the RTPInterface constructor makes the socket that sends RTP packets to VLC clients non-blocking. Multiple VLC clients share the RTP data in the RTPInterface buffer, so when data arrives from the IP camera faster than the buffered data can be sent to all VLC clients (which should really only happen in a LAN test environment; in production I would not comment this line out), the buffer gets flushed and the VLC clients that joined later show no picture. With makeSocketNonBlocking commented out, the proxy fetches more data from the IP camera only after the current data has been sent to all VLC clients.

(6) In OnDemandServerMediaSubsession.cpp, find OnDemandServerMediaSubsession::deleteStream:

void OnDemandServerMediaSubsession::deleteStream(unsigned clientSessionId,
                                                 void*& streamToken) {
  StreamState* streamState = (StreamState*)streamToken;

  // Look up (and remove) the destinations for this client session:
  Destinations* destinations
    = (Destinations*)(fDestinationsHashTable->Lookup((char const*)clientSessionId));
  if (destinations != NULL) {
    fDestinationsHashTable->Remove((char const*)clientSessionId);

    // Stop streaming to these destinations:
    if (streamState != NULL) streamState->endPlaying(destinations);
  }

  // Delete the "StreamState" structure if it's no longer being used:
  if (streamState != NULL) {
    if (streamState->referenceCount() > 0) --streamState->referenceCount();
    if (streamState->referenceCount() == 0) { // change this line to: if (streamState->referenceCount() == 0 && fParentSession->deleteWhenUnreferenced())
      delete streamState;
      streamToken = NULL;
    }
  }

  // Finally, delete the destinations themselves:
  delete destinations;
}

Change if (streamState->referenceCount() == 0) to if (streamState->referenceCount() == 0 && fParentSession->deleteWhenUnreferenced()). Before the change, when the number of VLC clients requesting a stream drops to zero, streamState is deleted, releasing all the resources associated with that stream; the next VLC client to request the stream then has to wait while everything is re-allocated, which is slow. Moreover, in my tests on Windows, delete streamState would occasionally crash.

ProxyServerMediaSubsession is a subclass of OnDemandServerMediaSubsession, but for a ProxyServerMediaSubsession we can choose not to release the resources when the number of requesting VLC clients drops to zero, so that later clients start faster. ServerMediaSubsession keeps a pointer to its parent ServerMediaSession, and ServerMediaSession has an fDeleteWhenUnreferenced attribute that says whether the session should be deleted and its resources released once it is no longer referenced; it defaults to False.
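ServerMediaSession exposes this flag through its deleteWhenUnreferenced() accessor (which, if I recall the API correctly, returns a Boolean reference), so a session that should keep the original behavior can opt back in explicitly, e.g. sms->deleteWhenUnreferenced() = True;.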

(7) I had hoped to build a Live555-based proxy server with this program, but found that even on a LAN, Live555 relaying an IP camera's video to VLC had a latency of nearly 3 s (meaning that after clicking play in VLC it takes about 3 s for the picture to appear). I never found a fix (I've heard the picture cannot appear until an I-frame arrives, but my knowledge of video coding is too limited to say). If anyone finds a solution, please contact me. Thanks.

(8) Problem (7) later turned out to be incorrect use of the VLC player. Advice online pointed to the caching duration, and adjusting it did help somewhat on the LAN, but for clients reaching the streaming server from an external network it was still slow. The actual fix was to open "Tools > Preferences" in VLC and set the "live555 stream transport" mode to "RTP over RTSP".


Just after that problem was solved, a new one appeared in its wake: for the same stream, a VLC client that connects later "kicks out" the clients that connected earlier. Concretely, when the newer client starts rendering, the earlier client's picture freezes, and shortly afterwards it is disconnected from the streaming server. This did not happen when "live555 stream transport" was set to "HTTP".

It turned out that changing the False to True in if (!sendDataOverTCP(socketNum, framingHeader, 4, False)) inside RTPInterface::sendRTPorRTCPPacketOverTCP fixes it.
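From memory, the relevant fragment of RTPInterface::sendRTPorRTCPPacketOverTCP looks roughly like this (the last argument of sendDataOverTCP is a flag forcing the send to complete even if the socket would block; treat the exact code as a sketch):

u_int8_t framingHeader[4];
framingHeader[0] = '$';
framingHeader[1] = streamChannelId;
framingHeader[2] = (u_int8_t)((packetSize&0xFF00)>>8);
framingHeader[3] = (u_int8_t)(packetSize&0xFF);
if (!sendDataOverTCP(socketNum, framingHeader, 4, True/*changed from False*/)) break;
if (!sendDataOverTCP(socketNum, packet, packetSize, True)) break;

Forcing the 4-byte '$' framing header out in full seems plausible here: a partially sent framing header would desynchronize the interleaved RTSP/RTP byte stream on that TCP connection.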

Author: 昨夜星辰
