
Video Calling

ConnectyCube Video Calling P2P API is built on top of the WebRTC protocol and uses the WebRTC Mesh architecture.

Max people per P2P call is 4.

To understand the difference between P2P calling and Conference calling, please read our ConnectyCube Calling API comparison blog page.

Get started with SDK

Follow the Getting Started guide on how to connect ConnectyCube SDK and start building your first app.

Code samples

There are ready-to-go FREE code samples to help you better understand how to integrate video calling capabilities into your apps:


Installation with CocoaPods

CocoaPods is a dependency manager for Objective-C and Swift that automates and simplifies the process of using 3rd-party frameworks or libraries, such as ConnectyCubeCalls, in your projects. You can follow their getting started guide if you don’t have CocoaPods installed.

Copy and paste the following line into your Podfile:

pod 'ConnectyCube'

Now you can install the dependencies in your project:

$ pod install

From now on, be sure to always open the generated Xcode workspace (.xcworkspace) instead of the project file when building your project.
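For context, a complete Podfile might look like the sketch below; the target name and the iOS platform version are placeholders for your own project:

```ruby
# Podfile sketch; 'YourApp' and the platform version are placeholders
platform :ios, '13.0'
use_frameworks!

target 'YourApp' do
  pod 'ConnectyCube'
end
```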

Importing framework

At this point, everything is ready for you to start using ConnectyCube and ConnectyCubeCalls frameworks. Just import the frameworks wherever you need to use them:

import ConnectyCube

Run script phase for archiving

Add a “Run Script Phase” to the Build Phases of your project. Paste the following snippet into the script:

bash "${BUILT_PRODUCTS_DIR}/${FRAMEWORKS_FOLDER_PATH}/ConnectyCubeCalls.framework/"

This fixes a known Apple bug that prevents publishing archives to the App Store with dynamic frameworks that contain simulator platforms. The script runs only when archiving.

Swift namespacing

The ConnectyCubeCalls framework supports simplified Swift names, which means that instead of, for example, CYBCallSession you can simply type CallSession. Sometimes this may produce the error __someclass__ is ambiguous for type lookup in this context, which means another class with the same Swift name exists and the two conflict (Swift cannot tell which class you mean in the current context):

Swift ambiguous name

In this case you must qualify the class with its namespace, which is the same as the framework name, using dot syntax:
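For example (a sketch; the second module declaring the same name is hypothetical):

```swift
import ConnectyCubeCalls
// Hypothetical situation: another imported module also declares a
// `CallSession` type, so the short name alone is ambiguous.
// Qualify the type with the framework name to resolve the conflict:
var session: ConnectyCubeCalls.CallSession?
```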



ConnectyCube Chat API is used as a signaling transport for Video Calling API, so in order to start using Video Calling API you need to connect to Chat.


Before any interaction with ConnectyCubeCalls you need to initialize it once using the method below:



Logging is a powerful tool to see the exact flow of the ConnectyCubeCalls framework and analyze its decisions. By enabling logs you will be able to debug most issues, or perhaps help us analyze your problems.

Basic logs are enabled by default. To enable verbose logs use the method below:

ConnectycubeSettings().isDebugEnabled = true

To get more info about an active call you can also enable stats reporting.

Background mode

You can use our SDK in background mode as well; however, this requires a specific app capability. Under the app build settings, open the Capabilities tab. In this tab, turn on Background Modes and check Audio, AirPlay and Picture in Picture to enable the audio background mode.

Background modes

If everything is correctly configured, iOS provides an indicator that your app is running in the background with an active audio session. This is shown as a red status bar background, as well as an additional bar indicating the name of the app holding the active audio session, in this case, your app.
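Enabling the Audio checkbox adds the audio value to the UIBackgroundModes array in your Info.plist. The resulting entry looks like this:

```xml
<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
</array>
```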

Client delegate

In order to operate and receive calls you need to set up a client delegate. Your class must conform to the RTCCallSessionCallback protocol (CYBCallClientDelegate in v1, deprecated). Use the method below to subscribe:

ConnectyCube().p2pCalls.addSessionCallbacksListener(callback: self)

Initiate a call

let opponentsIds = [3245, 2123, 3122].map { KotlinInt(value: $0) }
let newSession = ConnectyCube().p2pCalls.createSession(userIds: opponentsIds, callType: CallType.video)
// userInfo - the custom user information dictionary for the call. May be nil.
let userInfo = ["key": "value"] as? KotlinMutableDictionary<NSString, NSString> // optional
newSession.startCall(userInfo: userInfo)

After this your opponents will receive a call request every 5 seconds for a duration of 45 seconds (you can configure these settings with WebRTCConfig; CYBCallConfig in v1, deprecated):

//MARK: RTCCallSessionCallback
ConnectyCube().p2pCalls.addSessionCallbacksListener(callback: self)

extension YourClass: RTCCallSessionCallback {
    func onReceiveNewSession(session: P2PSession) {
        if self.session != nil {
            // we already have a video/audio call session, so we reject the new one
            // userInfo - the custom user information dictionary from the caller. May be nil.
            let userInfo = ["key": "value"] as? KotlinMutableDictionary<NSString, NSString> // optional
            session.rejectCall(userInfo: userInfo)
            return
        }
        // saving session instance here
        self.session = session
    }
}

self.session refers to the current session. Each audio/video call has a unique sessionID. This allows you to have more than one independent audio/video call. If you want to change the call answer timeout, e.g. set it to 60 seconds:

WebRTCConfig().answerTimeInterval = 60

Default value is 60 seconds.

If an opponent does not respond to your call within the answer time interval, the callback below will be called:

//MARK: RTCCallSessionCallback
ConnectyCube().p2pCalls.addSessionCallbacksListener(callback: self)

func onUserNotAnswer(session: P2PSession, opponentId: Int32) {
    // opponent did not answer within the answer time interval
}

Accept a call

In order to accept a call, use the P2PSession method below:

// userInfo - the custom user information dictionary for the accept call. May be nil.
let userInfo = ["key": "value"] as? KotlinMutableDictionary<NSString, NSString> // optional
self.session?.acceptCall(userInfo: userInfo)

After this your opponent will receive an accept signal:

//MARK: RTCCallSessionCallback
ConnectyCube().p2pCalls.addSessionCallbacksListener(callback: self)

func onCallAcceptByUser(session: P2PSession, opponentId: Int32, userInfo: [String : Any]?) {
    // opponent accepted the call
}

Reject a call

In order to reject a call, use the P2PSession method below:

// userInfo - the custom user information dictionary for the reject call. May be nil.
let userInfo = ["key": "value"] as? KotlinMutableDictionary<NSString, NSString> // optional
self.session?.rejectCall(userInfo: userInfo)
// and release session instance
self.session = nil

After this your opponent will receive a reject signal:

//MARK: RTCCallSessionCallback
ConnectyCube().p2pCalls.addSessionCallbacksListener(callback: self)

func onCallRejectByUser(session: P2PSession, opponentId: Int32, userInfo: [String : Any]?) {
    // opponent rejected the call
}

End a call

In order to end a call, use the P2PSession method below:

// userInfo - the custom user information dictionary for the end call. May be nil.
let userInfo = ["key": "value"] as? KotlinMutableDictionary<NSString, NSString> // optional
self.session?.hangUp(userInfo: userInfo)
// and release session instance
self.session = nil

After this your opponent will receive a hangup signal:

//MARK: RTCCallSessionCallback
ConnectyCube().p2pCalls.addSessionCallbacksListener(callback: self)

func onReceiveHangUpFromUser(session: P2PSession, opponentId: Int32, userInfo: [String : Any]?) {
    // opponent ended the call
}

Connection life cycle

Everything starts when you have received a new session and accepted the call.

//not supported

After that, WebRTC will internally perform all operations needed to connect both users, and you will get either onConnectedToUser, or onDisconnectedFromUser if the connection failed for some reason:

//MARK: RTCSessionStateCallback
ConnectyCube().p2pCalls.addSessionStateCallbacksListener(callback: self)

func onConnectedToUser(session: BaseSession<AnyObject>, userId: Int32) {
    // connection established
}

func onDisconnectedFromUser(session: BaseSession<AnyObject>, userId: Int32) {
    // connection failed or was lost
}

When you or your opponent closes the call, you will receive the onDisconnectedFromUser callback first, and then onConnectionClosedForUser once the connection is fully closed:

//MARK: RTCSessionStateCallback
ConnectyCube().p2pCalls.addSessionStateCallbacksListener(callback: self)

func onDisconnectedFromUser(session: BaseSession<AnyObject>, userId: Int32) {
    // connection lost
}

func onConnectionClosedForUser(session: BaseSession<AnyObject>, userId: Int32) {
    // connection fully closed
}

Session states

Each session has its own state. You can always access the current state via the P2PSession property:

let sessionState = self.session?.state

You can also receive real-time callbacks when a session changes its state:

//MARK: RTCSessionStateCallback
ConnectyCube().p2pCalls.addSessionStateCallbacksListener(callback: self)

func onStateChanged(session: BaseSession<AnyObject>, state: BaseSessionRTCSessionState) {
    // react to the new session state
}

Here are all possible states that can occur:

  • RTC_SESSION_NEW: session was successfully created and ready for the next step
  • RTC_SESSION_PENDING: session is in pending state for other actions to occur
  • RTC_SESSION_CONNECTING: session is in progress of establishing connection
  • RTC_SESSION_CONNECTED: session was successfully established
  • RTC_SESSION_GOING_TO_CLOSE: session is going to close
  • RTC_SESSION_CLOSED: session was closed
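For instance, onStateChanged can map these states onto UI updates. This is a sketch: the Swift spellings of the enum cases are assumptions derived from the names above, and showSpinner/hideSpinner/tearDownCallUI are hypothetical helpers in your view controller:

```swift
func onStateChanged(session: BaseSession<AnyObject>, state: BaseSessionRTCSessionState) {
    switch state {
    case .rtcSessionConnecting:
        showSpinner()        // connection is being established
    case .rtcSessionConnected:
        hideSpinner()        // media is flowing, start the call timer
    case .rtcSessionGoingToClose, .rtcSessionClosed:
        tearDownCallUI()     // the call is over
    default:
        break
    }
}
```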

Connection state

Use the session state to track the connection state. You can always access the current state via the P2PSession property:

let sessionState = self.session?.state

Show local video

In order to show your local video track from the camera, create a UIView on the storyboard and then use the following code:

//MARK: your view controller code
var localVideo = RTCMTLVideoView()

override func viewDidLoad() {
    super.viewDidLoad()
    ConnectyCube().p2pCalls.addVideoTrackCallbacksListener(callback: self)
    setupVideo(localVideo)
}

private func setupVideo(_ videoView: RTCMTLVideoView) {
    view.insertSubview(videoView, at: 0)
    videoView.translatesAutoresizingMaskIntoConstraints = false
    videoView.videoContentMode = .scaleAspectFill
}

//MARK: VideoTracksCallback
func onLocalVideoTrackReceive(session: BaseSession<AnyObject>, videoTrack: ConnectycubeVideoTrack) {
    videoTrack.addSink(videoSink: VideoSink(renderer: localVideo))
}

Show remote video

//MARK: your view controller code
var remoteVideo = RTCMTLVideoView()

override func viewDidLoad() {
    super.viewDidLoad()
    ConnectyCube().p2pCalls.addVideoTrackCallbacksListener(callback: self)
    setupVideo(remoteVideo)
}

private func setupVideo(_ videoView: RTCMTLVideoView) {
    view.insertSubview(videoView, at: 0)
    videoView.translatesAutoresizingMaskIntoConstraints = false
    videoView.videoContentMode = .scaleAspectFill
}

//MARK: VideoTracksCallback
func onRemoteVideoTrackReceive(session: BaseSession<AnyObject>, videoTrack: ConnectycubeVideoTrack, userId: Int32) {
    videoTrack.addSink(videoSink: VideoSink(renderer: remoteVideo))
}

You can always get the remote video track for a specific user ID in the call via the P2PSession media stream manager (assuming the track exists):

let remoteVideoTrack = session?.mediaStreamManager?.videoTracks[24450] as? ConnectycubeVideoTrack // video track for user 24450

Mute audio

You can disable/enable audio during a call:

if let localAudioTrack = self.session?.mediaStreamManager?.localAudioTrack {
    localAudioTrack.enabled = !localAudioTrack.enabled
}

Mute remote audio

You can always get the remote audio track for a specific user ID in the call via the P2PSession media stream manager (assuming the track exists):

let remoteAudioTrack = self.session?.mediaStreamManager?.audioTracks[24450] as? ConnectycubeAudioTrack // audio track for user 24450

You can also mute remote media tracks on your side by changing the value of the enabled property of a specific remote media track:

remoteAudioTrack.enabled = false

Mute video

You can disable/enable video during a call:

if let localVideoTrack = self.session?.mediaStreamManager?.localVideoTrack {
    localVideoTrack.enabled = !localVideoTrack.enabled
}


Due to WebRTC restrictions, black frames will be placed into the stream content while video is disabled.

Switch camera

You can switch the video capture position during a call (default: front camera):


Audio session management

//not supported

Initialization and deinitialization

//not supported

Audio output

func switchSpeaker(_ sender: UIButton) {
    let isCurrentSpeaker = !AVAudioSession.sharedInstance().currentRoute.outputs
        .filter { $0.portType == AVAudioSession.Port.builtInSpeaker }.isEmpty
    let port = isCurrentSpeaker ? AVAudioSession.PortOverride.none : AVAudioSession.PortOverride.speaker
    do {
        try AVAudioSession.sharedInstance().overrideOutputAudioPort(port)
    } catch let error as NSError {
        print("audioSession error: \(error.localizedDescription)")
    }
}

Screen sharing

Screen sharing allows you to share information from your application to all of your opponents. It gives you an ability to promote your product, share a screen with formulas to students, distribute podcasts, share video/audio/photo moments of your life in real-time all over the world.


Due to Apple iOS restrictions, the screen sharing feature works only within the app it is used in.

With iOS 11, Apple introduced a new way to capture your in-app screen using ReplayKit’s RPScreenRecorder class. This is the most optimal way to share the screen, and it requires minimal resources as it is handled by iOS itself.

// Coming soon


30 fps is the maximum WebRTC can go; even though RPScreenRecorder supports 60 fps, you must set it to 30 or lower.

WebRTC stats reporting

Stats reporting is an insanely powerful tool that can help debug a call if there are any problems with it (e.g. lags, missing audio/video etc.).

// Coming soon

By calling statsString you will receive a generic report string containing the most useful data for debugging a call, for example:

CN 565ms | local->local/udp | (s)248Kbps | (r)869Kbps
VS (input) 640x480@30fps | (sent) 640x480@30fps
VS (enc) 279Kbps/260Kbps | (sent) 200Kbps/292Kbps | 8ms | H264
AvgQP (past 30 encoded frames) = 36
VR (recv) 640x480@26fps | (decoded)27 | (output)27fps | 827Kbps/0bps | 4ms
AS 38Kbps | opus
AR 37Kbps | opus | 168ms | (expandrate)0.190002
Packets lost: VS 17 | VR 0 | AS 3 | AR 0


  • CN: connection info
  • VS: video sent
  • VR: video received
  • AvgQP: average quantization parameter (valid only for video; it is calculated as a fraction of the current delta sum over the current delta of encoded frames; a low value corresponds to good quality; the range of the value per frame is defined by the codec being used)
  • AS: audio sent
  • AR: audio received
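If you want to surface part of the report in your own diagnostics UI, you can extract individual lines from the string. The packetLoss helper below is not part of the SDK; it is a Foundation-only sketch that assumes the "Packets lost" line keeps the format shown above:

```swift
import Foundation

// Hypothetical helper: pulls the packet-loss counters (VS/VR/AS/AR)
// out of the "Packets lost" line of a statsString report.
func packetLoss(from report: String) -> [String: Int] {
    var result: [String: Int] = [:]
    // find the "Packets lost: ..." line in the multi-line report
    guard let line = report
        .components(separatedBy: .newlines)
        .first(where: { $0.hasPrefix("Packets lost:") }) else { return result }
    // split "VS 17 | VR 0 | AS 3 | AR 0" into "name value" fields
    let fields = String(line.dropFirst("Packets lost:".count))
        .components(separatedBy: "|")
    for field in fields {
        let parts = field.split(separator: " ")
        if parts.count == 2, let value = Int(parts[1]) {
            result[String(parts[0])] = value
        }
    }
    return result
}
```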

You can also use stats reporting to see who is currently talking in a group call. Use audioReceivedOutputLevel for that.

Take a look at the CYBCallStatsReport header file to see all the other stats properties that might be useful for you.

Receive a call in background (CallKit)

For mobile apps, there can be a situation when an opponent’s app is either closed (killed) or in the background (inactive).

In this case, to be able to still receive a call request, you can use Push Notifications. The flow should be as follows:

  • a call initiator should send a push notification along with a call request;
  • when an opponent’s app is killed or in the background, the opponent will receive a push notification about an incoming call and will be able to accept or reject it. If the call is accepted, or the push notification is tapped, the app opens; the user should auto-login and connect to chat, and will then be able to join the incoming call;

Please refer to Push Notifications API guides regarding how to integrate Push Notifications in your app.

Apple CallKit using VOIP push notifications

ConnectyCubeCalls fully supports Apple CallKit. In this block, we will guide you through the most important things you need to know when integrating CallKit into your application (besides those Apple has already provided in the link above).

Project preparations

In your Xcode project, make sure that your app supports Voice over IP services. To do that, open your Info.plist and make sure the Required background modes array contains the corresponding entry:

VOIP background mode
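In plist source form, this is the voip value in the UIBackgroundModes array:

```xml
<key>UIBackgroundModes</key>
<array>
    <string>voip</string>
</array>
```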

Now you are ready to integrate CallKit methods using Apple’s guide here.

//not supported

Group video calls

Because of the Mesh architecture we use for multipoint calls, where every participant sends and receives media to and from all other participants, the current solution supports group calls with up to 4 people.

ConnectyCube also provides an alternative solution for up to 12 people - the Multiparty Video Conferencing API.


//not supported

Settings and configuration

You can change different settings for your calls using the WebRTCConfig class (CYBCallConfig in v1). All of them are listed below:

Answer time interval

If an opponent does not answer within the answer time interval, the onUserNotAnswer and then onConnectionClosedForUser delegate methods will be called.

Default value: 60 seconds

Minimum value: 10 seconds

WebRTCConfig().answerTimeInterval = 45

Dialing time interval

Indicates how often we send notifications to your opponents about your call.

Default value: 5 seconds

Minimum value: 3 seconds

WebRTCConfig().dialingTimeInterval = 5

DTLS (Datagram Transport Layer Security)

Datagram Transport Layer Security (DTLS) is used to provide communications privacy for datagram protocols. This fosters a secure signaling channel that cannot be tampered with. In other words, no eavesdropping or message forgery can occur on a DTLS encrypted connection.

DTLS is enabled by default.

// DTLS is enabled by default

Custom ICE servers

You can customize the list of ICE servers. By default, ConnectyCubeCalls uses internal ICE servers, which is usually enough, but you can always set your own.

Q: How does WebRTC select which TURN server to use if multiple options are given?

A: During the connectivity checking phase, WebRTC will choose the TURN relay with the lowest round-trip time. Thus, setting multiple TURN servers allows your application to scale up in terms of bandwidth and number of users.

let username = "turn_login"
let password = "turn_password"
let serverStun = ConnectycubeIceServer(uri: "stun:stun.randomserver.example")
let serverTurn = ConnectycubeIceServer(uri: "turn:turn.randomserver.example", userName: username, password: password)

Video codecs

You can choose video codecs from available values:

  • WebRTCMediaConfig.VideoCodec.vp8: VP8 video codec
  • WebRTCMediaConfig.VideoCodec.vp9: VP9 video codec
  • WebRTCMediaConfig.VideoCodec.h264: H264 video codec

VP8 is a software-supported video codec on Apple devices, which means it is the most resource-demanding among all available ones.

VP9 is an improved version of VP8. It has better compression performance, improved rate control, and more efficient coding tools, which allow for higher-quality video at lower bitrates.

H264 is a hardware-supported video codec, which means it is the most optimal one to use; with hardware acceleration you can always guarantee the best performance when encoding and decoding video frames. There are two options available:

  • baseline: the most suited one for video calls as it has low cost (default value)
  • high: mainly suited for broadcast to ensure you have the best picture possible. Takes more resources to encode/decode for the same resolution you set
WebRTCMediaConfig().videoCodec = .h264


This sets your preferred codec; WebRTC will always choose the most suitable codec for both sides of the call through negotiation.

Video quality

Video quality depends on the hardware you use. An iPhone 4s will not handle Full HD rendering, but an iPhone 6+ will. It also depends on the network you use and on how many connections you have. For group calls, set a lower video quality. For 1-to-1 calls you can set a higher quality.

WebRTCMediaConfig().videoWidth = WebRTCMediaConfig.VideoQuality.hdVideo.width
WebRTCMediaConfig().videoHeight = WebRTCMediaConfig.VideoQuality.hdVideo.height

WebRTC automatically scales the video resolution and quality to keep the network connection active. To get the best quality and performance you should use the h264-baseline codec as your preferred one.

  1. If some opponent in the call does not support H264, VP8 will be used automatically.
  2. If both caller and callee support H264, H264 will be used.

Audio codecs

You can choose audio codecs from available values:

  • WebRTCMediaConfig.AudioCodec.opus
  • WebRTCMediaConfig.AudioCodec.isac

Default value: WebRTCMediaConfig.AudioCodec.isac

WebRTCMediaConfig().audioCodec = .opus


Opus

In the latest versions of Firefox and Chrome, Opus is used by default for encoding audio streams. This codec is relatively new (released in 2012). It implements lossy audio compression. Opus can be used for both low and high bitrates.

Supported bitrate: constant and variable, from 6 kbit/s to 510 kbit/s. Supported sampling rates: from 8 kHz to 48 kHz.

If you develop a calls application that is supposed to work with high-quality audio, the only choice for audio codec is Opus.

OPUS has the best quality, but it also requires a good internet connection.


iSAC

This codec was developed specially for VoIP applications and streaming audio.

Supported bitrates: adaptive and variable, from 10 kbit/s to 52 kbit/s. Supported sampling rate: 32 kHz.

Good choice for the voice data, but not nearly as good as OPUS.


iLBC

This audio codec is well-known; it was released in 2004 and became part of the WebRTC project in 2011, when Google acquired Global IP Solutions (the company that developed iLBC).

When you have very bad channels and low bandwidth, you should definitely try iLBC - it is strong in such cases.

Supported bitrates: fixed, 15.2 kbit/s or 13.33 kbit/s. Supported sampling rate: 8 kHz.


When you have a strong and reliable internet connection, use Opus. If you use calls on 3G networks, use iSAC. If you still have problems, try iLBC.

Bandwidth cap

In case of a low-bandwidth network, you can try to limit the call bandwidth cap to trade quality for stability:

WebRTCMediaConfig().audioStartBitrate = 50
WebRTCMediaConfig().videoStartBitrate = 256