
Video Calling

ConnectyCube Video Calling P2P API is built on top of the WebRTC protocol and uses a WebRTC Mesh architecture.

A P2P call can have a maximum of 4 participants.

To learn the difference between P2P calling and Conference calling, please read our ConnectyCube Calling API comparison blog page.

Get started with SDK

Follow the Getting Started guide on how to connect ConnectyCube SDK and start building your first app.

Code samples

There are ready-to-go FREE code samples to help you better understand how to integrate video calling capabilities into your apps.

Connect VideoChat SDK

To add video chat capabilities to your app, include the relevant dependency in the project's build.gradle file (only for SDK v1):

SDK v1 kotlin

dependencies {
    implementation "com.connectycube:connectycube-android-sdk-videochat:x.x.x"
}

Preparations

Permissions

The video chat module requires camera, microphone, internet and storage permissions. Make sure you add relevant permissions to your app manifest:

<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />

You can get more info on how to work with app permissions in the Android Permissions Overview.

Add signaling manager

ConnectyCube Chat API is used as the signaling transport for the Video Calling API, so in order to start using the Video Calling API you need to connect to Chat first.

WebRTC signaling is handled by the SDK automatically, so there is no need to add a signaling manager to receive incoming video chat calls:

//no need to add signaling manually

Prepare an Activity

To receive callbacks about the current session state, its video tracks (local and remote) and the states of the session's peer connections,
you must implement the appropriate interfaces and register them by calling the following methods on the P2PSession instance:

fun addSessionStateCallbacksListener(callback: RTCSessionStateCallback<P2PSession>)
fun addVideoTrackCallbacksListener(callback: VideoTracksCallback<P2PSession>)

and also the following method on RTCClient or P2PCalls instance:

fun addSessionCallbacksListener(callback: RTCSessionEventsCallback)
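
Below is a minimal skeleton of an Activity that registers these listeners (a sketch only: the callback overrides are omitted here and shown in the sections below, and currentSession is an assumed field holding the active call session):

// Sketch: register/unregister the documented listeners in the Activity lifecycle.
// The required override methods are listed in the following sections.
class CallActivity : AppCompatActivity(),
    RTCSessionStateCallback<P2PSession>,
    VideoTracksCallback<P2PSession>,
    RTCSessionEventsCallback {

    private var currentSession: P2PSession? = null // assumed field for the active session

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // session-level events: new call, accept, reject, hang up, ...
        P2PCalls.addSessionCallbacksListener(this)
        // per-session peer connection states and video tracks
        currentSession?.addSessionStateCallbacksListener(this)
        currentSession?.addVideoTrackCallbacksListener(this)
    }

    override fun onDestroy() {
        P2PCalls.removeSessionCallbacksListener(this)
        currentSession?.removeSessionStateCallbacksListener(this)
        currentSession?.removeVideoTrackCallbacksListener(this)
        super.onDestroy()
    }

    // RTCSessionStateCallback, VideoTracksCallback and RTCSessionEventsCallback
    // overrides go here (see the listings below)
}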

Setup views

Set up your layout views for remote and local video tracks:

<com.connectycube.webrtc.RTCSurfaceView
    android:id="@+id/opponentView"
    android:layout_width="100dp"
    android:layout_height="100dp" />

RTCSurfaceView allows using several views in a screen layout and overlapping them, which is useful for group video calls.

RTCSurfaceView is a surface view (it extends the org.webrtc.SurfaceViewRenderer class) that renders a video track. It has its own rendering lifecycle: init() prepares it for rendering and release() frees resources when the video track no longer exists.

RTCSurfaceView is automatically initialized after the surface is created, in the surfaceCreated() callback. You can also initialize RTCSurfaceView manually using the EGL context obtained from EglBaseContext. Use this method only while the Activity is alive and GL resources exist:

val surfaceView: RTCSurfaceView = ...
val eglContext = EglBaseContext.getEglContext()
surfaceView.init(eglContext.eglBaseContext, null)

The release() method should be called when the video track is no longer valid, for example when you receive the onConnectionClosedForUser() callback from the session or when the session is about to close. You should call release() before the Activity is destroyed and while the EGLContext is still valid; if you don't call this method, the GL resources might leak.
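
For example (a sketch; opponentView is the RTCSurfaceView from the layout above, and the released flag is only an illustration to guard against double release):

// Sketch: release the renderer's GL resources once the video track is no longer valid.
private var opponentViewReleased = false

override fun onConnectionClosedForUser(session: P2PSession, userId: Int) {
    releaseOpponentView()
}

override fun onDestroy() {
    // make sure release() runs before the Activity is destroyed, while the EGLContext is still valid
    releaseOpponentView()
    super.onDestroy()
}

private fun releaseOpponentView() {
    if (!opponentViewReleased) {
        opponentView.release()
        opponentViewReleased = true
    }
}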

Here is the RTCSurfaceView interface:

RTCSurfaceView.init(EglBase.Context, RendererCommon.RendererEvents) // Initializes this view using the WebRTC EGL context. It is allowed to call init() to reinitialize the view after a previous init()/release() cycle.
RTCSurfaceView.release() // Releases all related GL resources
RTCSurfaceView.setScalingType(RendererCommon.ScalingType) // Sets how the video will fill the allowed layout area
RTCSurfaceView.setMirror(Boolean) // Sets whether the video stream should be mirrored
RTCSurfaceView.requestLayout() // Requests to invalidate the view when something has changed

To render a video track received from an opponent, use the following snippet:

fun fillVideoView(userId: Int,
                  videoView: RTCSurfaceView,
                  videoTrack: ConnectycubeVideoTrack
) {
    videoTrack.addSink(videoView.videoSink)
}

Notify RTCClient you are ready for processing calls

As soon as your app is ready to process calls and the activity exists, no additional setup is required:

//do nothing

!> Pay attention: without signaling in place (i.e. without a Chat connection), you will not be able to process calls.

Initiate a call

To call users you should create a session and start a call:

val opponents: MutableList<Int> = ArrayList()
opponents.add(21)
// A user can pass additional info along with the call request
val userInfo: HashMap<String, String> = HashMap()
userInfo["key1"] = "value1"
//Init session
val session = P2PCalls.createSession(opponents, CallType.VIDEO)
session.startCall(userInfo)

After this, your opponents will receive the onReceiveNewSession call request callback (see Track session callbacks below).

Track session callbacks

To manage all session states you need to implement the RTCCallSessionCallback interface.

// implement interface RTCCallSessionCallback
P2PCalls.addSessionCallbacksListener(this)
P2PCalls.removeSessionCallbacksListener(this)

Once you have added an instance of a class that implements RTCCallSessionCallback via P2PCalls.addSessionCallbacksListener(listener), you will start receiving session callbacks.

The interface is the following:

/**
 * Called each time a new session request is received.
 */
override fun onReceiveNewSession(session: P2PSession) {}

/**
 * Called when the user didn't answer within the timeout period.
 */
override fun onUserNotAnswer(session: P2PSession, opponentId: Int) {}

/**
 * Called when the opponent has rejected your call.
 */
override fun onCallRejectByUser(session: P2PSession, opponentId: Int, userInfo: Map<String, String?>?) {}

/**
 * Called when the opponent has accepted your call.
 */
override fun onCallAcceptByUser(session: P2PSession, opponentId: Int, userInfo: Map<String, String?>?) {}

/**
 * Called when the opponent hung up.
 */
override fun onReceiveHangUpFromUser(session: P2PSession, opponentId: Int, userInfo: Map<String, String?>?) {}

/**
 * Called when the user didn't take any action on the received session.
 */
override fun onUserNoActions(session: P2PSession, userId: Int) {}

/**
 * Called when the session is about to close.
 */
override fun onSessionStartClose(session: P2PSession) {}

/**
 * Called when the session is closed.
 */
override fun onSessionClosed(session: P2PSession) {}

Accept a call

You will receive all incoming call requests in the RTCCallSessionCallback.onReceiveNewSession(session) callback.

There are two ways to proceed:

  • accept incoming call;
  • reject incoming call.

To accept the call request use the following code snippet:

// RTCCallSessionCallback
override fun onReceiveNewSession(session: P2PSession) {
    // obtain received user info
    // val userInfo = session.getUserInfo()

    // set your user info if needed
    val userInfo = HashMap<String, String>()
    userInfo["key1"] = "value1"

    // Accept the incoming call
    session.acceptCall(userInfo)
}

After this, your opponent will receive an accept callback:

// RTCSessionEventsCallback
override fun onCallAcceptByUser(session: P2PSession,
                                opponentId: Int,
                                userInfo: Map<String, String?>?) {
}

Render video stream to view

To manage video tracks you need to implement the VideoTracksCallback<P2PSession> interface:

p2pSession.addVideoTrackCallbacksListener(this)
p2pSession.removeVideoTrackCallbacksListener(this)

/**
 * Called when local video track is received
 */
override fun onLocalVideoTrackReceive(session: P2PSession,
                                      videoTrack: ConnectycubeVideoTrack
) {}

/**
 * Called when remote video track is received
 */
override fun onRemoteVideoTrackReceive(session: P2PSession,
                                       videoTrack: ConnectycubeVideoTrack,
                                       userId: Int
) {}

Once you have access to a video track, you can render it to a view in your app UI:

private fun fillVideoView(userId: Int,
                          videoView: RTCSurfaceView,
                          videoTrack: ConnectycubeVideoTrack,
                          remoteRenderer: Boolean
) {
    videoTrack.addSink(videoView.videoSink)
    updateVideoView(videoView, !remoteRenderer, ScalingType.SCALE_ASPECT_FILL)
}

private fun updateVideoView(surfaceView: RTCSurfaceView,
                            mirror: Boolean,
                            scalingType: ScalingType
) {
    surfaceView.setScalingType(scalingType)
    surfaceView.setMirror(mirror)
    surfaceView.requestLayout()
}

Obtain audio tracks

To get access to audio tracks you need to implement the RTCClientAudioTracksCallback interface:

p2pSession.addAudioTrackCallbacksListener(this)
p2pSession.removeAudioTrackCallbacksListener(this)

/**
 * Called when local audio track is received
 */
override fun onLocalAudioTrackReceive(session: P2PSession, audioTrack: ConnectycubeAudioTrack) {}

/**
 * Called when remote audio track is received
 */
override fun onRemoteAudioTrackReceive(session: P2PSession,
                                       audioTrack: ConnectycubeAudioTrack,
                                       userId: Int
) {}

Then you can use these audio tracks to mute/unmute audio. Read more below.

Receive a call in background

For mobile apps, there can be a situation when an opponent's app is either closed (killed) or in the background (inactive).

In this case, to still be able to receive a call request, you can use Push Notifications. The flow is as follows:

  • a call initiator should send a push notification along with a call request;
  • when an opponent's app is killed or in the background, the opponent will receive a push notification about the incoming call and will be able to accept or reject it. If the call is accepted or the push notification is tapped, the app opens, the user should auto login and connect to chat, and will then be able to join the incoming call;

Please refer to Push Notifications API guides regarding how to integrate Push Notifications in your app.
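
On the initiator side the flow might look like this (a sketch; sendCallPushNotification() is a hypothetical app-level helper, not an SDK function):

// Sketch: start the call, then notify opponents whose app may be killed or in the background.
fun startCallWithPush(opponents: MutableList<Int>) {
    val userInfo = hashMapOf("callerName" to "Alice")
    val session = P2PCalls.createSession(opponents, CallType.VIDEO)
    session.startCall(userInfo)

    sendCallPushNotification(opponents, userInfo)
}

// Hypothetical helper: implement it with the Push Notifications API (see the guide above).
fun sendCallPushNotification(opponents: List<Int>, payload: Map<String, String>) {
    TODO("Send a push with the incoming-call payload using the Push Notifications API")
}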

Reject a call

To reject a call request just use the following method:

// RTCCallSessionCallback
override fun onReceiveNewSession(session: P2PSession) {
    // obtain received user info
    // val userInfo = session.userInfo

    // set your user info if needed
    val userInfo = HashMap<String, String>()
    userInfo["key1"] = "value1"

    // Reject the incoming call
    session.rejectCall(userInfo)
}

After this, your opponent will receive a reject callback:

// RTCSessionEventsCallback
override fun onCallRejectByUser(session: P2PSession,
                                opponentId: Int,
                                userInfo: Map<String, String?>?
) {}

End a call

To end a call use the following snippet:

// set your user info if needed
val userInfo = HashMap<String, String>()
userInfo["key1"] = "value1"
session.hangUp(userInfo)

After this, your opponent will receive a hang up callback:

override fun onReceiveHangUpFromUser(session: P2PSession, opponentId: Int, userInfo: Map<String, String?>?) {}

Release resource

When you no longer want to receive and process video calls, release the EGL context and destroy P2PCalls:

EglBaseContext.release()
P2PCalls.destroy()
//P2PCalls.register() to init

This unregisters the client from receiving any video chat events, clears session callbacks and closes existing signaling channels.

Monitor session connection state

To monitor the states of your peer connections (users) you need to implement RTCSessionStateCallback interface:

// RTCSessionStateCallback
p2pSession.addSessionStateCallbacksListener(this)
p2pSession.removeSessionStateCallbacksListener(this)

/**
 * Called when the connection with the opponent is established
 */
override fun onConnectedToUser(session: P2PSession, userId: Int) {}

/**
 * Called when the connection is closed
 */
override fun onConnectionClosedForUser(session: P2PSession, userId: Int) {}

/**
 * Called when the opponent is disconnected
 */
override fun onDisconnectedFromUser(session: P2PSession, userId: Int) {}

/**
 * Called when the session state has changed
 */
override fun onStateChanged(session: P2PSession, state: BaseSession.RTCSessionState) {}

Mute audio

val localAudioTrack: ConnectycubeAudioTrack? = session.mediaStreamManager.localAudioTrack
// mute
localAudioTrack?.enabled = false
// unmute
localAudioTrack?.enabled = true
// is enabled?
val isEnabled = localAudioTrack?.enabled == true
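
For example, a small toggle helper might look like this (a sketch; session is the active P2PSession):

// Sketch: toggle the local audio track between muted and unmuted.
fun toggleAudioMute(session: P2PSession) {
    val track = session.mediaStreamManager.localAudioTrack ?: return
    track.enabled = !track.enabled
}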

The session's mediaStreamManager (getMediaStreamManager()) returns an instance of RTCMediaStreamManager.

!> Pay attention: RTCMediaStreamManager is tied to the RTCSession lifecycle, so you should use it only while the RTCSession is active.

Mute video

val localVideoTrack: ConnectycubeVideoTrack? = session.mediaStreamManager.localVideoTrack
// mute
localVideoTrack?.enabled = false
// unmute
localVideoTrack?.enabled = true
// is enabled?
val isEnabled = localVideoTrack?.enabled == true

Switch camera

You can switch the video camera during a call (default is front camera):

val videoCapturer = session.mediaStreamManager.videoCapturer
videoCapturer.switchCamera(cameraSwitchHandler)
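
A sketch of a cameraSwitchHandler, assuming the handler type is WebRTC's CameraVideoCapturer.CameraSwitchHandler (an assumption based on the underlying WebRTC API; localView is a local-preview RTCSurfaceView used for illustration):

// Sketch: react to the camera switch result.
val cameraSwitchHandler = object : CameraVideoCapturer.CameraSwitchHandler {
    override fun onCameraSwitchDone(isFrontCamera: Boolean) {
        // mirror the local preview only for the front camera
        localView.setMirror(isFrontCamera)
    }

    override fun onCameraSwitchError(errorDescription: String) {
        Log.e("CallActivity", "Camera switch failed: $errorDescription")
    }
}

session.mediaStreamManager.videoCapturer.switchCamera(cameraSwitchHandler)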

Screen sharing

Screen sharing allows you to share your screen from your application with all of your opponents. It gives you the ability to promote your product, share a screen with formulas to students, distribute podcasts, or share video/audio/photo moments of your life in real time all over the world.

!> Pay attention! The screen sharing feature works only on devices with Android 5 (Lollipop) and newer.

To simplify using this feature, we prepared a special RTCScreenCapturer class.

To implement this feature in your application you should do 3 simple steps:

1. Request projection permission from user:
//Coming soon
2. Listen to granted permission inside Activity (or Fragment):
//Coming soon
3. Set RTCScreenCapturer as current video capturer:
//Coming soon

!> Note! To create an instance of RTCScreenCapturer you need to use the data from the permission request.
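
While the snippets above are being prepared, here is a rough sketch of the permission request part using the standard Android MediaProjection API; how the resulting data is passed to RTCScreenCapturer is not shown and should be taken from the SDK reference:

// Sketch: request the screen capture (media projection) permission from the user.
const val REQUEST_SCREEN_CAPTURE = 1001

fun requestScreenCapturePermission(activity: Activity) {
    val projectionManager =
        activity.getSystemService(Context.MEDIA_PROJECTION_SERVICE) as MediaProjectionManager
    activity.startActivityForResult(
        projectionManager.createScreenCaptureIntent(),
        REQUEST_SCREEN_CAPTURE
    )
}

// In your Activity: receive the granted permission data.
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == REQUEST_SCREEN_CAPTURE && resultCode == Activity.RESULT_OK && data != null) {
        // Use 'data' to create the RTCScreenCapturer (see the note above); the exact
        // constructor/usage is not covered here.
    }
}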

Configure general settings

WebRTCConfig provides an interface to customize some SDK video chat settings:

/**
 * Add ICE servers to the list.
 */
WebRTCConfig.iceServerList.add(ConnectycubeIceServer(uri, userName, password))
//Coming soon
//DialingTimeInterval, AnswerTimeInterval

For example, you can customize the list of ICE servers the SDK uses. The WebRTC engine will choose the TURN relay with the lowest round-trip time, so setting multiple TURN servers allows your application to scale up in terms of bandwidth and number of users:

WebRTCConfig.iceServerList.add(
    ConnectycubeIceServer(
        "turn:numb.default.com",
        "default@default.com",
        "default@default.com"
    )
)

Configure media settings

You can use the WebRTCMediaConfig class to configure various media settings like video/audio codecs, bitrate, fps, etc.:

// WebRTCMediaConfig
var audioCodec: AudioCodec = AudioCodec.ISAC
var videoCodec: VideoCodec? = null
var videoWidth = 0
var videoHeight = 0
var videoFps = 0
var audioStartBitrate = 0
var videoStartBitrate = 0
var videoHWAcceleration = false
var useOpenSLES = false // Allow OpenSL ES audio if the device supports it
var useBuildInAEC = true // Enable built-in AEC if the device supports it
var audioProcessingEnabled = true // Enable/disable audio processing (added for audio performance)
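
A usage sketch, assuming WebRTCMediaConfig is a singleton exposing the mutable properties listed above; set the values before creating a session:

// Sketch: tune media settings before starting a call.
WebRTCMediaConfig.videoWidth = 640
WebRTCMediaConfig.videoHeight = 480
WebRTCMediaConfig.videoFps = 30
WebRTCMediaConfig.videoHWAcceleration = true

val session = P2PCalls.createSession(opponents, CallType.VIDEO)
session.startCall(hashMapOf())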

WebRTC Stats reporting

Stats reporting is a powerful tool that can help debug a call if there are any problems with it (e.g. lags, missing audio/video, etc.). To enable stats reports you should first set the stats reporting frequency using the config method below:

//Coming soon
//ConnectycubeStatsReport

You will then be able to receive a callback and perform operations with the stats report instance for the current period of time:

//Coming soon
//ConnectycubeStatsReport

You can also use stats reporting to see who is currently talking in a group call; use audioReceiveOutputLevel for that.

Take a look at RTCStatsReport to see all of the other stats properties that might be useful for you.

Group video calls

Because of the Mesh architecture we use for multipoint calls, where every participant sends and receives its media to and from all other participants, the current solution supports group calls with up to 4 people.
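
Starting a group call uses the same API as a 1-to-1 call, just with more opponents (a sketch based on the "Initiate a call" snippet above):

// Sketch: a group video call with three opponents (4 participants including the caller).
val opponents: MutableList<Int> = arrayListOf(21, 22, 23)
val session = P2PCalls.createSession(opponents, CallType.VIDEO)
session.startCall(hashMapOf())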

ConnectyCube also provides an alternative solution for up to 12 people - the Multiparty Video Conferencing API.

Call recording

Coming soon