Video Calling

ConnectyCube Video Calling API is built on top of WebRTC protocol.

It allows you to easily add a video calling feature, similar to Skype, into your app using the API.

ConnectyCube Chat API is used as a signaling transport for Video Calling API, so in order to start using Video Calling API you need to connect to Chat.

The minimum supported Android version is 4.1.

Get started with SDK

Follow the Getting Started guide on how to connect ConnectyCube SDK and start building your first app.

Code samples

There are ready-to-go FREE code samples to help you better understand how to integrate video calling capabilities in your apps:

Connect VideoChat SDK

To include video chat capabilities in your app, add the relevant dependency to your project's build.gradle file:

dependencies {
    implementation "com.connectycube:connectycube-android-sdk-videochat:x.x.x"
}

The video chat module requires camera, microphone, internet and storage permissions. Make sure you add relevant permissions to your app manifest:

<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />

You can get more info on how to work with app permissions in the Android Permissions Overview.

Add signaling manager

To be able to receive incoming video chat calls, you need to add WebRTC signaling to RTCClient:

ConnectycubeChatService.getInstance().getVideoChatWebRTCSignalingManager()
        .addSignalingManagerListener(new VideoChatSignalingManagerListener() {
            @Override
            public void signalingCreated(Signaling signaling, boolean createdLocally) {
                if (!createdLocally) {
                    RTCClient.getInstance(getApplicationContext()).addSignaling((WebRTCSignaling) signaling);
                }
            }
        });

Prepare an Activity

To receive callbacks about the current RTCSession state, about video tracks (local and remote), and about the session's peer connection states, implement the appropriate interfaces and register them with the following methods on the RTCSession instance:

public void addSessionCallbacksListener(RTCSessionConnectionCallbacks callback)
public void addVideoTrackCallbacksListener(RTCClientVideoTracksCallbacks callback)

and also the following method on RTCClient instance:

public void addSessionCallbacksListener(RTCClientSessionCallbacks callback)
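Put together, a call Activity might implement these interfaces and register itself once a session exists. This is a sketch: the class name, the `currentSession` field, and the `initSessionListeners()` helper are illustrative, not prescribed by the SDK.

```java
public class CallActivity extends Activity
        implements RTCClientSessionCallbacks, RTCSessionConnectionCallbacks,
                   RTCClientVideoTracksCallbacks {

    private RTCSession currentSession;

    // Call this once an RTCSession exists (e.g. after creating or receiving one)
    private void initSessionListeners(RTCSession session) {
        currentSession = session;

        // session-level callbacks: peer connection states and video tracks
        session.addSessionCallbacksListener(this);
        session.addVideoTrackCallbacksListener(this);

        // client-level callbacks: new sessions, accept/reject/hang-up events
        RTCClient.getInstance(this).addSessionCallbacksListener(this);
    }

    // ...implementations of the callback methods go here...
}
```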

Setup views

Set up your layout views for the local and remote video tracks, e.g. two RTCSurfaceView elements (the view IDs here are illustrative):

<com.connectycube.videochat.view.RTCSurfaceView
    android:id="@+id/localVideoView"
    android:layout_width="100dp"
    android:layout_height="100dp" />

<com.connectycube.videochat.view.RTCSurfaceView
    android:id="@+id/remoteVideoView"
    android:layout_width="100dp"
    android:layout_height="100dp" />

RTCSurfaceView allows using several views in one screen layout and lets them overlap each other, which is useful for group video calls.

RTCSurfaceView is a surface view (it extends the org.webrtc.SurfaceViewRenderer class) that renders a video track. It has its own rendering lifecycle: it uses the init() method to prepare for rendering and release() to free resources when the video track no longer exists.

RTCSurfaceView is automatically initialized after the surface is created, in the surfaceCreated() callback. You can also initialize RTCSurfaceView manually using the Egl context obtained from RTCClient. Do this only while the Activity is alive and GL resources exist:

RTCSurfaceView surfaceView = ...;
EglBase eglContext = RTCClient.getInstance(getContext()).getEglContext();
surfaceView.init(eglContext.getEglBaseContext(), null);

The release() method should be called when the video track is no longer valid, for example when you receive the onConnectionClosedForUser() callback from RTCSession, or when the RTCSession is about to close. You must call release() before the Activity is destroyed and while the EGLContext is still valid. If you don't, GL resources may leak.
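For example, release() can be called from the onConnectionClosedForUser() callback. The getViewForOpponent() helper below is hypothetical, standing in for however your app maps opponents to views:

```java
@Override
public void onConnectionClosedForUser(RTCSession session, Integer userID) {
    // the video track for this user is no longer valid -- free the view's GL resources
    RTCSurfaceView videoView = getViewForOpponent(userID); // hypothetical app-specific lookup
    if (videoView != null) {
        videoView.release();
    }
}
```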

Here is the RTCSurfaceView interface:

RTCSurfaceView.init(EglBase.Context, RendererCommon.RendererEvents); // initialize this view using the WebRTC Egl context; init() may be called again after a previous init()/release() cycle
RTCSurfaceView.release(); // releases all related GL resources
RTCSurfaceView.setScalingType(scalingType); //Set how the video will fill the allowed layout area
RTCSurfaceView.setMirror(mirror); //Set if the video stream should be mirrored or not.
RTCSurfaceView.requestLayout(); // Request to invalidate view when something has changed

To render received video track from an opponent use the following snippet:

private void fillVideoView(int userId, RTCSurfaceView videoView, RTCVideoTrack videoTrack) {
    videoTrack.addRenderer(new VideoRenderer(videoView));
}

Notify RTCClient you are ready for processing calls

As soon as your app is ready to process calls and your Activity exists, call the following in the Activity class:

RTCClient.getInstance(this).prepareToProcessCalls();
Pay attention: if you forget to add the signaling manager, you will not be able to process calls.

Initiate a call

To call users you should create a session and start a call:

List<Integer> opponents = new ArrayList<>();
opponents.add(opponentUserId); // ConnectyCube user IDs to call (illustrative variable)

// You can pass additional info along with the call request
Map<String, String> userInfo = new HashMap<>();
userInfo.put("key1", "value1");

// Init the session
RTCSession session = RTCClient.getInstance(this).createNewSessionWithOpponents(opponents, CONFERENCE_TYPE_VIDEO);

// Start the call
session.startCall(userInfo);

After this, your opponents will receive a call request callback onReceiveNewSession via RTCClientSessionCallbacks (read below).

Track session callbacks

To manage all session states, implement the RTCClientSessionCallbacks interface.


Once you have called the RTCClient.getInstance(this).prepareToProcessCalls() method and registered an instance of a class that implements RTCClientSessionCallbacks via RTCClient.getInstance(this).addSessionCallbacksListener(listener), you will start receiving session callbacks.
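The registration steps above can be sketched as follows (assuming `this` is an Activity that implements RTCClientSessionCallbacks):

```java
RTCClient client = RTCClient.getInstance(this);

// tell the SDK the app is ready to process calls
client.prepareToProcessCalls();

// register for session callbacks (this implements RTCClientSessionCallbacks)
client.addSessionCallbacksListener(this);
```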

The interface of RTCClientSessionCallbacks is the following:

// Called each time a new session request is received
void onReceiveNewSession(RTCSession session);

// Called when the user didn't answer within the timer expiration period
void onUserNotAnswer(RTCSession session, Integer userID);

// Called when the opponent has rejected your call
void onCallRejectByUser(RTCSession session, Integer userID, Map<String, String> userInfo);

// Called when the opponent has accepted your call
void onCallAcceptByUser(RTCSession session, Integer userID, Map<String, String> userInfo);

// Called when the opponent hung up
void onReceiveHangUpFromUser(RTCSession session, Integer userID);

// Called when the user didn't take any action on the received session
void onUserNoActions(RTCSession session, Integer userID);

// Called when the session is about to close
void onSessionStartClose(RTCSession session);

// Called when the session is closed
void onSessionClosed(RTCSession session);

Accept a call

You will receive all incoming call requests in RTCClientSessionCallbacks.onReceiveNewSession(session) callback.

There are two ways to proceed:

  • accept incoming call;
  • reject incoming call.

To accept the call request use the following code snippet:

// RTCClientSessionCallbacks
public void onReceiveNewSession(RTCSession session) {
    // obtain the caller's user info
    Map<String, String> callerInfo = session.getUserInfo();

    // set your own user info if needed
    Map<String, String> userInfo = new HashMap<>();
    userInfo.put("key1", "value1");

    // accept the incoming call
    session.acceptCall(userInfo);
}

After this, your opponent will receive an accept callback:

// RTCClientSessionCallbacks
public void onCallAcceptByUser(RTCSession session, Integer userID, Map<String, String> userInfo) {
    // the opponent has accepted your call
}

Render video stream to view

For managing video tracks you need to implement RTCClientVideoTracksCallbacks interface:

 * Called when local video track is received
void onLocalVideoTrackReceive(RTCSession session, RTCVideoTrack localVideoTrack);

 * Called when remote video track is received
void onRemoteVideoTrackReceive(RTCSession session, RTCVideoTrack remoteVideoTrack, Integer userID);

Once you've got access to a video track, you can render it to a view in your app UI:

private void fillVideoView(int userId, RTCSurfaceView videoView, RTCVideoTrack videoTrack, boolean remoteRenderer) {
    videoTrack.addRenderer(new VideoRenderer(videoView));
    updateVideoView(videoView, !remoteRenderer, RendererCommon.ScalingType.SCALE_ASPECT_FILL);
}

private void updateVideoView(RTCSurfaceView surfaceView, boolean mirror, RendererCommon.ScalingType scalingType) {
    surfaceView.setScalingType(scalingType);
    surfaceView.setMirror(mirror);
    surfaceView.requestLayout();
}

Obtain audio tracks

To get access to audio tracks, implement the RTCClientAudioTracksCallback interface:

// Called when the local audio track is received
void onLocalAudioTrackReceive(RTCSession session, RTCAudioTrack audioTrack);

// Called when a remote audio track is received
void onRemoteAudioTrackReceive(RTCSession session, RTCAudioTrack audioTrack, Integer userID);

Then you can use these audio tracks to mute/unmute audio. Read more below.
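For example, the remote tracks received in onRemoteAudioTrackReceive() can be stored and later used to locally silence a particular opponent. The `remoteAudioTracks` map and `muteOpponent()` helper are illustrative:

```java
private final Map<Integer, RTCAudioTrack> remoteAudioTracks = new HashMap<>();

@Override
public void onRemoteAudioTrackReceive(RTCSession session, RTCAudioTrack audioTrack, Integer userID) {
    remoteAudioTracks.put(userID, audioTrack);
}

// locally silence a single opponent (hypothetical helper)
private void muteOpponent(Integer userID) {
    RTCAudioTrack track = remoteAudioTracks.get(userID);
    if (track != null) {
        track.setEnabled(false); // a disabled track renders no audio
    }
}
```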

Reject a call

To reject a call request just use the following method:

// RTCClientSessionCallbacks
public void onReceiveNewSession(RTCSession session) {
    // obtain the caller's user info
    Map<String, String> callerInfo = session.getUserInfo();

    // set your own user info if needed
    Map<String, String> userInfo = new HashMap<>();
    userInfo.put("key1", "value1");

    // reject the incoming call
    session.rejectCall(userInfo);
}

After this, your opponent will receive a reject callback:

// RTCClientSessionCallbacks
public void onCallRejectByUser(RTCSession session, Integer userID, Map<String, String> userInfo) {
    // the opponent has rejected your call
}


End a call

To end a call use the following snippet:

// set your user info if needed
Map<String, String> userInfo = new HashMap<>();
userInfo.put("key1", "value1");

// hang up the call
session.hangUp(userInfo);
After this, your opponent will receive a hang up callback:

public void onReceiveHangUpFromUser(RTCSession session, Integer userID) {
    // the opponent has ended the call
}

Release resource

When you no longer want to receive and process video calls, destroy the RTCClient:

RTCClient.getInstance(this).destroy();

This method unregisters RTCClient from receiving any video chat events, clears session callbacks and closes existing signaling channels.

Monitor session connection state

To monitor the states of your peer connections with other users, implement the RTCSessionStateCallback interface:

// Called when the connection with the opponent is established
void onConnectedToUser(RTCSession session, Integer userID);

// Called when the connection is closed
void onConnectionClosedForUser(RTCSession session, Integer userID);

// Called when the opponent is disconnected
void onDisconnectedFromUser(RTCSession session, Integer userID);

// Called when the connection establishment process is started
void onStartConnectToUser(RTCSession session, Integer userID);

// Called when the opponent is disconnected by timeout
void onDisconnectedTimeoutFromUser(RTCSession session, Integer userID);

// Called when the connection with the opponent has failed
void onConnectionFailedWithUser(RTCSession session, Integer userID);

// Called if errors occurred during the connection establishment process
void onError(RTCSession session, RTCException exception);

Mute audio

RTCAudioTrack localAudioTrack = currentSession.getMediaStreamManager().getLocalAudioTrack();

// mute
localAudioTrack.setEnabled(false);

// unmute
localAudioTrack.setEnabled(true);

// is enabled (not muted)?
boolean isEnabled = localAudioTrack.enabled();

getMediaStreamManager() method returns an instance of RTCMediaStreamManager.

Pay attention: RTCMediaStreamManager is attached to the RTCSession lifecycle, so you should use RTCMediaStreamManager only while the RTCSession is active.

Mute video

RTCVideoTrack localVideoTrack = currentSession.getMediaStreamManager().getLocalVideoTrack();

// mute
localVideoTrack.setEnabled(false);

// unmute
localVideoTrack.setEnabled(true);

// is enabled (not muted)?
boolean isEnabled = localVideoTrack.enabled();

Switch camera

You can switch the video camera during a call (the front camera is used by default):

RTCCameraVideoCapturer videoCapturer =
        (RTCCameraVideoCapturer) currentSession.getMediaStreamManager().getVideoCapturer();

// pass a CameraSwitchHandler (or null) to be notified when the switch completes or fails
videoCapturer.switchCamera(null);
Screen sharing

Coming soon

Configure general settings

RTCConfig provides an interface to customize some SDK video chat settings:

// Set the dialing time interval. Default value is 5 sec
public static void setDialingTimeInterval(long dialingTimeInterval);

// Set the answer time interval. Default value is 60 sec
public static void setAnswerTimeInterval(long answerTimeInterval);

// Set the max number of connections in a conference. Default value is 10
public static void setMaxOpponentsCount(Integer maxOpponentsCount);

// Set the max allowed time to repair a connection after it was lost. Default value is 10 sec
public static void setDisconnectTime(Integer disconnectTime);

// Set the list of ICE servers
public static void setIceServerList(List<PeerConnection.IceServer> iceServerList);

For example, you can customize the list of ICE servers the SDK uses. The WebRTC engine will choose the TURN relay with the lowest round-trip time, so setting multiple TURN servers allows your application to scale up in terms of bandwidth and number of users:

List<PeerConnection.IceServer> iceServerList = new LinkedList<>();
iceServerList.add(new PeerConnection.IceServer("turn:numb.default.com", "default@default.com", "default@default.com"));
RTCConfig.setIceServerList(iceServerList);

Configure media settings

You can use the RTCMediaConfig class to configure various media settings such as video/audio codecs, bitrate, and FPS:

public static void setAudioCodec(AudioCodec audioCodec);

public static void setVideoCodec(VideoCodec videoCodec);

public static void setVideoWidth(int videoWidth);

public static void setVideoHeight(int videoHeight);

public static void setVideoFps(int videoFps);

public static void setVideoStartBitrate(int videoStartBitrate);

public static void setAudioStartBitrate(int audioStartBitrate);

public static void setVideoHWAcceleration(boolean videoHWAcceleration);

public static void setUseBuildInAEC(boolean useBuildInAEC); // enable built-in acoustic echo cancellation (AEC) if the device supports it

public static void setUseOpenSLES(boolean useOpenSLES); // allow OpenSL ES audio if the device supports it

public static void setAudioProcessingEnabled(boolean audioProcessingEnabled); // enable/disable audio processing (added for audio performance)
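The RTCConfig and RTCMediaConfig setters above are static, so it makes sense to apply them once, before creating any session. A sketch with illustrative values (not recommendations):

```java
// general call settings
RTCConfig.setAnswerTimeInterval(30);  // callee has 30 sec to answer
RTCConfig.setDisconnectTime(15);      // allow 15 sec to repair a lost connection

// media settings
RTCMediaConfig.setVideoWidth(640);
RTCMediaConfig.setVideoHeight(480);
RTCMediaConfig.setVideoFps(30);
RTCMediaConfig.setVideoHWAcceleration(true);
```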

WebRTC Stats reporting

Coming soon

Group video calls

Because of the Mesh architecture we use for multipoint calls, where every participant sends and receives media to and from all other participants, the current solution supports group calls with up to 4 people.

ConnectyCube also provides an alternative solution for up to 10 people. Please contact us via the Contact form.

Call recording

Coming soon