1. User guides
1.1 Conferencing
- How to establish a video conference?
- How to determine the online status of a contact?
- How to automatically accept incoming calls?
- How to save the message history of a chat session?
- How to use a local movie/audio file for streaming/conference sessions?
1.2 Streaming
- How to set up a streaming server for a local file?
- How to stream a video source to a defined destination?
- How to stream an audio source to a defined destination?
- How to receive and display a video stream from the network?
- How to limit the frame rate of the outgoing video broadcast?
1.3 Recording
- How to record a video stream (independent from the source)?
- How to create a screen shot from a running video stream?
- How to record an audio stream (independent from the source)?
1.4 Screencasting
2. Detailed questions
2.1 Using Homer Conferencing
- Which target platforms are supported by Homer Conferencing and how can the software be installed?
- Which example application scenarios are possible with Homer Conferencing?
- Which webcams are supported by Homer Conferencing? Is a webcam mandatory for using Homer Conferencing?
- What does "desktop observation" mean in detail? Can Homer Conferencing be used as spyware?
- Where are the persistent settings of Homer Conferencing stored in the system?
- In which format are the user name and password for a SIP server (PBX box) stored?
- Which video codec should be used?
- Which audio codec should be used?
- Where is additional information about the currently used system environment listed?
- Why are there so many configuration options in Homer Conferencing?
2.2 Using the network with Homer Conferencing
- What means "QoS support"? How can it be disabled?
- What means "IPv6 support"? How can it be disabled?
- Which server ports are open if Homer Conferencing is used?
- Does Homer Conferencing support NAT traversal?
- What means "NAT outmost address"?
- What means "NAT type"?
- Which ports of the NAT box have to be forwarded to my local machine in order to be reachable from foreign hosts in the Internet?
- What can I do if my "Fritzbox" reports collisions regarding port 5060?
2.3 The project "Homer Conferencing"
- Is Homer Conferencing commercial or will it be commercial in the near future?
- What is the purpose of the project?
- What was the origin of Homer Conferencing?
2.4 The architecture of "Homer Conferencing"
- What is the general architecture of Homer Conferencing?
- Which data flows are possible during video processing?
- Which data flows are possible during audio processing?
- Which external libraries (dependencies) are used by Homer Conferencing?
- Which other SIP softphones are tested for interoperability with Homer Conferencing?
- Which path does the data flow take when the live picture from a connected video input device is captured?
- Which path does the data flow take when the live data from a connected audio input device is captured?
- Which data flows are possible for audio playback?
2.5 Building Homer Conferencing
- Which target platforms are supported by the Homer Conferencing source code?
- From where can I download a statically linked version of ffmpeg/libx264 for Windows?
- How can a statically linked version of ffmpeg be created for Linux/OS X?
- How can a statically linked version of libx264 be created for Linux/OS X?
- How can a statically linked version of Qt be created for Linux/OS X?
- How can a statically linked version of portaudio be created for OS X?
- How can a build environment be created for Windows?
- How can a build environment be created in OS X?
- How can I solve the error "libstdc++.so.6: version `GLIBCXX_3.4.14' not found"?
- How can I change the line delimiters of a file?
2.6 Developing Homer Conferencing
- What is the purpose of the "Header_*.h" files?
- Why aren't the Qt libraries used within the Homer Conferencing libraries?
- Why aren't the BOOST libraries used within Homer Conferencing?
2.7 Debugging Homer Conferencing
- How can a more verbose debug output from Homer Conferencing be activated?
- How can Homer Conferencing be triggered to write debug output to a file?
- How can Homer Conferencing be triggered to send debug messages through the network to an external application?
2.8 Network protocols and their concepts
- What is an RFC?
- What is SIP? Which advantages and disadvantages does SIP have?
- What is SDP?
- What is STUN and what is a STUN server?
- What is RTP and RTCP?
- What is RTSP?
- What is UDP-Lite?
- What is DCCP?
- What is SCTP?
1. User guides
1.1 Conferencing
- How to establish a video conference?
- How to determine the online status of a contact?
- How to automatically accept incoming calls?
- How to save the message history of a chat session?
- How to use a local movie/audio file for streaming/conference sessions?
1.2 Streaming
- How to set up a streaming server for a local file?
- How to stream a video source to a defined destination?
- How to stream an audio source to a defined destination?
- How to receive and display a video stream from the network?
- How to limit the frame rate of the outgoing video broadcast?
To set up a streaming server for a local file, apply the following steps:
Step 1: at the server side - select the online status "Online (auto)"
Step 2: at the server side - select a local movie/audio file (or multiple files) as input for streaming; they will appear in the playlist
Step 3: at each client side - see steps 1 and 2 of "How to establish a video conference?" and deactivate the video/audio broadcast
Step 4: at each client side - apply steps 3 and 4 of "How to establish a video conference?" in order to contact the server
1.3 Recording
- How to record a video stream (independent from the source)?
- How to create a screen shot from a running video stream?
- How to record an audio stream (independent from the source)?
1.4 Screencasting
2. Detailed questions
2.1 Using Homer Conferencing
- Which target platforms are supported by Homer Conferencing and how can the software be installed?
- Which example application scenarios are possible with Homer Conferencing?
- Which webcams are supported by Homer Conferencing? Is a webcam mandatory for using Homer Conferencing?
- What does "desktop observation" mean in detail? Can Homer Conferencing be used as spyware?
- Where are the persistent settings of Homer Conferencing stored in the system?
- In which format are the user name and password for a SIP server (PBX box) stored?
- Which video codec should be used?
- Which audio codec should be used?
- Where is additional information about the currently used system environment listed?
- Why are there so many configuration options in Homer Conferencing?
Homer Conferencing is released for Windows, Linux and OS X.
For Windows and Linux you have two options: an archive and an installer. If you selected the archive, you have to extract the complete file to a folder of your choice and afterwards start the included executable. If you selected the installer, you can directly use it for installing Homer Conferencing and afterwards start the installed executable from the installation folder.
For OS X, a DMG image file is available. After a successful download, you have to open the file with the help of the "Disk Image Mounter", which is part of every OS X release. Drag the Homer Conferencing package from the disk image and drop it onto the "Applications" folder. Afterwards, you'll find Homer Conferencing among your OS X applications. If you want, you can also add Homer Conferencing to your OS X dock.
The software supports four fields of application: conferencing, streaming, recording and screencasting.
Homer Conferencing is available for Windows, Linux and OS X. This also includes general webcam support. However, there are differences in hardware support. On Windows you may use any hardware for which you have working system drivers. On Linux you have to use webcams which work with the Video4Linux2 framework. On OS X systems, the support for webcam capturing is still work in progress.
In general, Homer Conferencing does not need a webcam. You may use the desktop content or a local video file instead for live conferencing. Moreover, you might also deactivate every kind of video transmission in the configuration dialog.
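On Linux you can check, for example, whether any Video4Linux2 capture devices are present before starting Homer Conferencing (a simple sanity check; the device numbering varies between systems):
ls -l /dev/video*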
With Homer Conferencing you are able to share the desktop view with a remote user in order to present some processes on your desktop screen. This feature has to be selected explicitly. In general, the software does not support hidden observation. Homer Conferencing only broadcasts data which is shown in the "Broadcast" window. Please do not post requests for such hidden spyware features.
There exist several settings which have to be stored in a persistent way in order to provide them at the next program start.
On Windows systems, these settings are stored within the system registry. In Linux environments, all settings are stored in a text file in the folder ".config" within your home directory.
On OS X systems, all settings are stored in a text file with the extension ".plist".
This is the standard behavior of the Qt library. See its official documentation (especially "QSettings") for further details about this procedure.
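For example, on Linux you can look for the settings file from the command line (a hedged sketch; the exact file name is not stated here and may differ):
ls ~/.config | grep -i homer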
Both user name and password are stored in a human readable format. They are included in your program settings, which are stored in a persistent way either in your registry (Windows) or in a local file on your hard disk (Linux and OS X).
The default video codec is H.261. It provides a low video quality but it also needs little network bandwidth. Therefore, it should work in most cases. But if you use a local network with 100 Mbit/s or more available bandwidth, you should rather use video codecs such as H.263(+) or H.264.
The default audio codec is MP3. It provides a high audio quality and also requires an acceptable amount of network bandwidth. Other audio codecs such as G711, G722 and PCM16 are available for conferences with participants who use other SIP softphones. Such softphones could be incompatible with the MP3 audio codec.
The help dialog, which you can open via the program menu "Homer" »» "Online help", shows some extended system information on the left-hand side. This might help if you want to compare two different installations.
The configuration dialog was designed to allow adjusting almost every kind of program parameter. The software should be useful for every kind of user, from beginners to interested network specialists.
2.2 Using the network with Homer Conferencing
- What means "QoS support"? How can it be disabled?
- What means "IPv6 support"? How can it be disabled?
- Which server ports are open if Homer Conferencing is used?
- Does Homer Conferencing support NAT traversal?
- What means "NAT outmost address"?
- What means "NAT type"?
- Which ports of the NAT box have to be forwarded to my local machine in order to be reachable from foreign hosts in the Internet?
- What can I do if my "Fritzbox" reports collisions regarding port 5060?
QoS means "quality of service" and describes the service quality provided by the network. For example, an audio stream can be transmitted through the network with or without a very high delay. Depending on this service quality, such an audio stream is useful either for real-time live chats or only for on-demand streaming with a good buffering mechanism.
At the moment, this feature is only used for network experiments and is deactivated in release versions. The QoS support of Homer Conferencing can be disabled by the command line option "-Disable=QoS".
IPv6 stands for version 6 of the "Internet Protocol" (IP), the successor of version 4 (IPv4). In general, IPv4 is supported by all networks. Sometimes it can be useful to deactivate the IPv6 support of Homer Conferencing, e.g., to solve compatibility problems with the used operating system.
The IPv6 support of Homer Conferencing can be disabled by the command line option "-Disable=IPv6".
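For example, to start the program without IPv6 support (assuming the executable is named "Homer" and resides in the current directory):
./Homer -Disable=IPv6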
Based on the default settings, Homer Conferencing offers its conference management service via an open SIP management port. The default setting is UDP port 5060. However, the actually used port depends on the program settings. Additionally, for each conference session further network ports will be opened: one for the incoming video stream and one for the incoming audio stream.
If you deactivate the conferencing support, either via the configuration dialog or via the command line option "-Disable=Conferencing", Homer Conferencing will only open network ports when starting a preview of a network stream.
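You can verify which ports are actually open with standard system tools, for example on Linux or OS X (the output format differs between systems):
netstat -an | grep 5060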
NAT means "Network Address Translation". It is used to map local IP addresses (192.168.x.y, 10.x.y.z, 172.16.x.y, ..) to globally unique addresses and to allow a bidirectional communication between local client hosts and hosts outside the local network. The applied address mapping very often represents a challenge for communication software like SIP softphones. The solution for this is very often based on the STUN protocol and allows a "NAT traversal". Homer Conferencing does support this technique. But you have to activate this feature explicitly in the configuration dialog because it utilizes additional communication with a server outside your local network. You have to select one of the suggested STUN servers, or one of your own, in order to allow Homer Conferencing to determine the NAT outmost address, which is needed for successful NAT traversal.
It is possible to cascade several NAT boxes and create a chain of NAT points. In such a scenario, the STUN protocol is only able to determine from outside the address of the outmost interface, which is directly linked to the global Internet where the STUN server is located. If you want to apply NAT traversal in such an environment, you have to use symmetric NAT at every NAT point.
Remember: if you activate NAT traversal and activate the feature "check availability of contacts" within the contact list, you should be reachable for these known contacts.
If you want to be reachable for unknown hosts or contacts (e.g., in case you want to offer a generally available streaming server), you have to forward the following ports from the outside interface of your NAT box to your local host within your local network (described for the default program settings): port 5060 UDP for conference management, port 5000 UDP for receiving the incoming video stream of the first participant, and port 5002 UDP for receiving the audio stream of the first participant. Additionally, for each additional participant, at least one port for the incoming video stream and one for the incoming audio stream have to be forwarded. However, you may generalize these rules to the following one: ports 5000-5060 UDP/TCP. This allows enough incoming video/audio streams for standard applications and supports UDP as well as TCP for data transport.
If you want to use different ports for receiving video/audio streams or conference management, you can adjust the used port numbers within the configuration dialog.
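If your NAT box is a Linux based router, such a generalized forwarding rule could, for example, look roughly like this (a hedged sketch; "192.168.1.10" is a placeholder for your local host and "eth0" for the external interface):
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 5000:5060 -j DNAT --to-destination 192.168.1.10
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 5000:5060 -j DNAT --to-destination 192.168.1.10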
If you have selected port 5060 to be forwarded to an internal host, as depicted in the following picture:
(picture was taken from the web interface of AVM equipment)
..you might get an error, as depicted in the following picture:
(picture was taken from the web interface of AVM equipment)
..about a colliding forwarding rule. To solve this problem, you have to use port 5061 instead, because those NAT boxes use a static forwarding rule for port 5060 in order to make their internal SIP server available to the outside.
2.3 The project "Homer Conferencing"
- Is Homer Conferencing commercial or will it be commercial in the near future?
- What is the purpose of the project?
- What was the origin of Homer Conferencing?
Homer Conferencing is an open source project, and everyone is allowed to use the software as it is (without warranty). A commercial release isn't planned.
The Homer software should be usable as a free-of-charge video chat and streaming tool which doesn't need any centralized infrastructure.
Moreover, it allows configuring almost every aspect of the audio-visual streaming, especially the network access and the usage of transport protocols.
The project was originally started for the MetraLabs GmbH. Later, it was continued for the Integrated Communication Systems Group and was partly integrated into the Forwarding on Gates Simulator/Emulator.
2.4 The architecture of "Homer Conferencing"
- What is the general architecture of Homer Conferencing and which are its main components?
- Which data flows are possible during video processing?
- Which data flows are possible during audio processing?
- Which external libraries (dependencies) are used by the Homer Conferencing components?
- Which other SIP softphones are tested for interoperability with Homer Conferencing?
- Which path does the data flow take when the live picture from a connected video input device is captured?
- Which path does the data flow take when the live data from a connected audio input device is captured?
- Which data flows are possible for audio playback?
See the following picture:
Homer (GUI): user interface for interaction with the user, provides the needed graphical views and controls
Multimedia (library): A/V hardware access, A/V stream encoding/decoding, RTP packet processing
Conference (library): SIP-based conference management, SDP packet processing
SoundOutput (library): hardware access for sound playback
Monitor (library): observation of threads, measurement of data flows and generation of packet statistics
NAPI (library): Network-API: abstracted Berkeley sockets, multiplexing between simultaneously available socket implementations
Base (library): operating system abstraction for Windows, Linux and OS X
See the following picture:
Details about "MediaSource Camera" can be found here:
"Which path does the data flow take when the live picture from a connected video input device is captured?"
See the following picture:
Details about "MediaSource Microphone" can be found here:
"Which path does the data flow take when the live data from a connected audio input device is captured?"
The following list shows which libraries are used by which software components:
Homer (GUI): Qt
Multimedia: ffmpeg, asound, portaudio
Conference: sofia-sip
SoundOutput (only OS X): asound, SDL, SDL_sound, SDL_mixer
Monitor: none
NAPI: none
Base: none
Interoperability tests are focused on open source software such as "Ekiga" and "Linphone".
2.5 Building Homer Conferencing
- Which target platforms are supported by the Homer Conferencing source code?
- From where can I download a statically linked version of ffmpeg/libx264 for Windows?
- How can a statically linked version of ffmpeg be created for Linux/OS X?
- How can a statically linked version of libx264 be created for Linux/OS X?
- How can a statically linked version of Qt be created for Linux/OS X?
- How can a statically linked version of portaudio be created for OS X?
- How can a build environment be created for Windows?
- How can a build environment be created in OS X?
- How can I solve the error "libstdc++.so.6: version `GLIBCXX_3.4.14' not found"?
- How can I change the line delimiters of a file?
The current source code can be compiled under Windows (based on MinGW), Linux, OS X (based on gcc from the XCode package) and FreeBSD.
The official Homer Conferencing releases for Windows are based on the dll builds from Zeranoe.
The current Homer Conferencing releases for Linux and OS X are based on a statically linked version of ffmpeg in order to avoid dependencies on system specifics. For creating such a statically linked version of ffmpeg, we use the original source code and configure it via the following command line:
./configure --enable-static --enable-shared --enable-pic --enable-libfaac --enable-gpl --enable-nonfree --enable-runtime-cpudetect --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libmp3lame --enable-libx264 --enable-libtheora --enable-zlib --enable-version3 --disable-vaapi --disable-vdpau --disable-outdev=sdl --disable-ffmpeg --disable-ffplay --disable-ffprobe --disable-ffserver --enable-libvpx --disable-symver
This needs some additional libraries such as faac, opencore-amr, mp3lame, x264, theora and vpx. After the configure step, the build process can be started with the "make" command.
The current Homer Conferencing releases for Linux and OS X are based on a statically linked version of x264 in order to avoid dependencies on system specifics and to provide a distribution-independent release version. To create such a statically linked version of the library x264, we use the original source code and configure it via the following command line:
./configure --enable-pic --enable-static --enable-shared
After the configure step the static build process can be started via the make command.
Current Homer Conferencing releases for Linux and OS X are based on a statically linked version of Qt in order to avoid dependencies on system specifics. For creating such a statically linked version of Qt for Linux, we use the original source code and configure it via the following command line:
./configure -shared -release -qt-zlib -no-gif -qt-libpng -qt-libmng -qt-libjpeg -qt-libtiff -no-openssl -no-glib -opensource -webkit -nomake demos -nomake examples -nomake tools -no-phonon -dbus-linked
For OS X based systems, you can use the following configure command:
./configure -shared -release -qt-zlib -no-gif -qt-libpng -qt-libmng -qt-libjpeg -qt-libtiff -no-openssl -opensource -webkit -nomake demos -nomake examples -nomake tools -no-phonon -no-dbus -no-framework
After the configure step, the static build process can be started via the make command. If the Qt library was successfully built on the OS X system, each library has to be manipulated in order to link against it at compile time and also to tell the runtime linker the path to the corresponding Qt library. The following command gives an example for QtXmlPatterns:
install_name_tool -id @executable_path/../lib/libQtXmlPatterns.dylib ./libQtXmlPatterns.dylib
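You can verify the resulting install name afterwards, for example, with otool (part of the XCode command line tools):
otool -D ./libQtXmlPatterns.dylib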
The current Homer Conferencing release for OS X is based on a statically linked version of portaudio in order to avoid dependencies on system specifics. For creating such a statically linked version of the library portaudio, we use the original source code (v19) and configure it via the following command line:
./configure --disable-mac-universal
Afterwards, you have to delete the line "#include <AudioToolbox/AudioToolbox>" in ./include/pa_mac_core.h. Moreover, you have to remove the string "-Werror" from the file ./Makefile. Now you should be able to build the statically linked library via the make command.
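Removing the "-Werror" string can, for example, also be done from the command line (a sketch for the BSD sed shipped with OS X):
sed -i '' 's/-Werror//g' ./Makefile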
To build Homer Conferencing on Windows systems, you have to execute the following steps:
MinGW: download the software from http://www.mingw.org and install it with the options for the C++ compiler, MSys and the MinGW developer toolkit selected
MinGW-Configuration: put the path to your MinGW gcc binary in the Windows system path (see the example after this list)
CMake: download the software from http://www.cmake.org for Windows and install it, let it insert an entry in your system path
Qt: download the software from http://qt-project.org/downloads for Windows and install it
Qt-Configuration: put the path to your Qt qmake binary in the Windows system path
Afterwards, you have a working build environment for Windows.
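For example, both path entries can be set temporarily in a command prompt (a hedged sketch; "C:\MinGW" and "C:\Qt" are placeholders for your actual installation folders):
set PATH=C:\MinGW\bin;C:\Qt\bin;%PATH%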
To build Homer Conferencing on OS X systems, you have to execute the following steps:
GCC/Make: You have to install the XCode package from Apple Inc. This includes a special distribution of the gcc compiler and all needed command line tools for the basic build process.
CMake: download the software from http://www.cmake.org for OS X and install it
Qt: download the software from http://qt-project.org/downloads for OS X and install it
System: we suggest using bash as your default shell; configure the location of your Qt installation in .bashrc (located in your home directory):
export QT_QMAKE_EXECUTABLE=/Users/yourUser/yourFolder/qt4.8.3/bin/qmake
Afterwards, you have a working build environment for OS X.
If you encounter such an error message during the build process, you used a different version of gcc at compile time than your current system provides. In order to solve this problem, you either have to use the same gcc version at compile time and at runtime, or you have to ship the needed libstdc++ library of your build system together with the release in order to make it redistributable.
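You can check which GLIBCXX versions your system's libstdc++ provides, for example (the library path may differ on your distribution):
strings /usr/lib/libstdc++.so.6 | grep GLIBCXX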
If you mixed the line delimiter styles of UNIX/Linux and Windows systems, you can easily solve this by using Eclipse. In the program menu "File" »» "Convert Line Delimiters To" you find the needed function.
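Alternatively, this can be done on the command line, e.g. (assuming the dos2unix tool is installed):
dos2unix myFile.txt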
2.6 Developing Homer Conferencing
- What is the purpose of the "Header_*.h" files?
- Why aren't the Qt libraries used within the Homer Conferencing libraries?
- Why aren't the BOOST libraries used within Homer Conferencing?
Some software components of Homer Conferencing use external libraries. In this case, every dependency of the source code is encapsulated in a special Header_*.h include file.
The goal is to implement a hard separation between multimedia/conference management and GUI. Moreover, the basic software components should remain as portable as possible.
The basic software components should remain as independent as possible from huge software libraries such as BOOST or Qt.
2.7 Debugging Homer Conferencing
- How can a more verbose debug output from Homer Conferencing be activated?
- How can Homer Conferencing be triggered to write debug output to a file?
- How can Homer Conferencing be triggered to send debug messages through the network to an external application?
In a Linux environment, it is possible to start Homer Conferencing with the option "-DebugLevel=Verbose". This brings up a lot of verbose debug output on the command line.
For Windows or OS X, you can use the file "Homer.log", which is automatically created in your home folder during each program run.
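For example (assuming the executable is named "Homer" and resides in the current directory):
./Homer -DebugLevel=Verbose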
The debug output is automatically written to the file "Homer.log" in your home folder. Additionally, you can add "-DebugOutputFile=myFile.log" to the command line options. This triggers additional debug output to the file "myFile.log".
You can add "-DebugOutputNetwork=myHost:1234" to the command line options. This triggers additional debug output, sent via UDP to port 1234 on the host "myHost".
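On the receiving host, these messages can be inspected, for example, with netcat (BSD syntax; traditional netcat expects "-p" before the port number):
nc -u -l 1234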
2.8 Network protocols and their concepts
- What is an RFC?
- What is SIP? Which advantages and disadvantages does SIP have?
- What is SDP?
- What is STUN and what is a STUN server?
- What is RTP and RTCP?
- What is RTSP?
- What is UDP-Lite?
- What is DCCP?
- What is SCTP?
RFC means "request for comments" and refers to technical or organizational documents published by the RFC Editor, which is part of the Internet Society.
See http://www.rfc-editor.org/rfcfaq.html for further information.
SIP means "Session Initiation Protocol" (RFC2543/RFC3261) and represents a signaling protocol, which is used for managing video/audio/text chats.
In contrast to H.323 and its sub protocols, SIP is kept simple and incorporates many elements of the Hypertext Transfer Protocol (HTTP) and Simple Mail Transfer Protocol (SMTP).
As one of the results of this approach, each SIP signaling message has a human readable structure. This supports easy debugging of the signaling processes of complex applications using SIP. However, some SIP based products (hardware and software) remain incompatible with each other. Moreover, media descriptions via the used sub protocol SDP are not as standardized as needed for current video/audio codecs.
SDP means "Session Description Protocol" (RFC2327/RFC4566). It is used as part of the SIP based signaling when triggering a call to another conference participant. Based on SDP, all needed media streams can be described and can further be negotiated by both participants of a conference session.
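To illustrate the format, a minimal SDP body announcing a single audio stream might look like the following sketch (the address is a placeholder; payload type 14 denotes MPEG audio according to RFC3551, and port 5002 matches the default audio port mentioned above):
v=0
o=alice 2890844526 2890844526 IN IP4 192.0.2.1
s=Audio session
c=IN IP4 192.0.2.1
t=0 0
m=audio 5002 RTP/AVP 14
a=rtpmap:14 MPA/90000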
STUN means "Simple Traversal of UDP through NATs" (RFC3489) or "Session Traversal Utilities for NAT" (RFC5389) and represents a signaling protocol for assisting devices behind a NAT box with their packet routing. A STUN server acts as a remote station for detecting one's own outmost NAT address. For this purpose, the STUN protocol is used to send a request to the STUN server, which responds by sending back the desired information about the outmost NAT address.
RTP means "Real-Time Transport Protocol" (RFC1889/RFC3550) and uses a sub protocol called RTCP ("RTP Control Protocol"). RTP is used for the transmission of audio-visual data through networks.
RTSP means "Real Time Streaming Protocol" (RFC2326). It was designed for controlling (play, stop,..) media streaming servers.
UDP-Lite is a light-weight version of UDP and was defined in RFC3828. It was designed as a variation of UDP and allows a potentially damaged data payload to be delivered to an application rather than being discarded by the receiving station. This is useful for video/audio streaming, where a damaged playback is better than displaying nothing.
DCCP means "Datagram Congestion Control Protocol" (RFC4340). It was designed to combine message-oriented transport with a reliable connection setup and teardown, "Explicit Congestion Notification" (ECN), congestion control, and feature negotiation. It represents an alternative transport protocol, especially for today's congested networks.
SCTP means "Stream Control Transmission Protocol" (RFC4960). It was designed as an alternative to UDP and TCP. SCTP supports message-oriented transmissions like UDP, but it is also able to ensure a reliable, in-sequence transport of messages with congestion control like TCP. Moreover, current SCTP enhancements add multipath support to SCTP, e.g., "Concurrent Multipath Transfer" (CMT-SCTP).
For further questions, contact info@homer-conferencing.com.
Copyright annotations
All depicted movie screenshots are based on the free movie "Big Buck Bunny" ((c) copyright 2008, Blender Foundation). For further information see the author's web site: http://www.bigbuckbunny.org/.