Video Calling

ConnectyCube Video Calling P2P API is built on top of the WebRTC protocol and is based on the WebRTC Mesh architecture.

Max people per P2P call is 4.

To learn the difference between P2P calling and Conference calling, please read our ConnectyCube Calling API comparison blog page.

Get started with SDK

Follow the Getting Started guide on how to connect ConnectyCube SDK and start building your first app.

Code sample

There is a ready-to-go FREE P2P Calls Sample to help you better understand how to integrate video calling capabilities into your apps.


Required preparations for supported platforms


Add the following entries to your Info.plist file, located in <project root>/ios/Runner/Info.plist:

<key>NSCameraUsageDescription</key>
<string>$(PRODUCT_NAME) Camera Usage!</string>
<key>NSMicrophoneUsageDescription</key>
<string>$(PRODUCT_NAME) Microphone Usage!</string>

These entries allow your app to access the camera and microphone.


Ensure the following permission is present in your Android Manifest file, located in <project root>/android/app/src/main/AndroidManifest.xml:

<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />

If you need to use a Bluetooth device, please add:

<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />

The Flutter project template adds it, so it may already be there.

Also, you will need to set your build settings to Java 8, because the official WebRTC jar now uses static methods in the EglBase interface. Just add this to your app-level build.gradle:

android {
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}

If necessary, in the same build.gradle, increase the minSdkVersion of defaultConfig up to 18 (the Flutter generator currently sets it to 16).


Add the following entries to your *.entitlements files, located in <project root>/macos/Runner:

<key>com.apple.security.network.client</key>
<true/>
<key>com.apple.security.device.camera</key>
<true/>
<key>com.apple.security.device.audio-input</key>
<true/>

These entries allow your app to access the internet, microphone, and camera.


The other supported platforms do not require any special preparations.

P2PClient setup

ConnectyCube Chat API is used as a signaling transport for Video Calling API, so in order to start using Video Calling API you need to connect to Chat.
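As a minimal sketch (assuming the connectycube_sdk Chat API; the user credentials below are placeholders), the chat connection could be established before initializing the call client like this:

```dart
import 'package:connectycube_sdk/connectycube_sdk.dart';

// hypothetical user credentials - replace with your own
CubeUser user = CubeUser(id: 123456, password: 'your_password');

// connect to chat first, then start listening for incoming calls
CubeChatConnection.instance.login(user).then((loggedUser) {
  P2PClient.instance.init();
}).catchError((error) {
  // handle the chat connection error
});
```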

To manage P2P calls in Flutter, use P2PClient. The code below shows the available functionality:

P2PClient callClient = P2PClient.instance; // returns the instance of P2PClient

callClient.init(); // starts listening for incoming calls
callClient.destroy(); // stops listening for incoming calls and clears callbacks

// called when P2PClient receives a new incoming call
callClient.onReceiveNewSession = (incomingCallSession) {
};

// called when any call session is closed
callClient.onSessionClosed = (closedCallSession) {
};

// creates a new P2PSession
callClient.createCallSession(callType, opponentsIds);

Create call session

In order to use the Video Calling API you need to create a session object: choose the opponents with whom you will have a call and the type of session (VIDEO or AUDIO). A P2PSession is created via P2PClient:

P2PClient callClient; //callClient created somewhere

Set<int> opponentsIds = {};
int callType = CallType.VIDEO_CALL; // or CallType.AUDIO_CALL

P2PSession callSession = callClient.createCallSession(callType, opponentsIds);

Add listeners

The main helpful callbacks and listeners are described below:

callSession.onLocalStreamReceived = (mediaStream) {
  // called when the local media stream is completely prepared
};

callSession.onRemoteStreamReceived = (callSession, opponentId, mediaStream) {
  // called when a remote media stream is received from an opponent
};

callSession.onRemoteStreamRemoved = (callSession, opponentId, mediaStream) {
  // called when a remote media stream was removed
};

callSession.onUserNoAnswer = (callSession, opponentId) {
  // called when no answer is received from an opponent during the timeout (default timeout is 60 seconds)
};

callSession.onCallRejectedByUser = (callSession, opponentId, [userInfo]) {
  // called when a 'reject' signal is received from an opponent
};

callSession.onCallAcceptedByUser = (callSession, opponentId, [userInfo]) {
  // called when an 'accept' signal is received from an opponent
};

callSession.onReceiveHungUpFromUser = (callSession, opponentId, [userInfo]) {
  // called when a 'hungUp' signal is received from an opponent
};

callSession.onSessionClosed = (callSession) {
  // called when the current session is closed
};

Initiate a call

Map<String, String> userInfo = {};

callSession.startCall(userInfo);

The userInfo is used to pass any extra parameters in the request to your opponents.

After this, your opponents will receive a new call session in the callback:

callClient.onReceiveNewSession = (incomingCallSession) {
};

Accept a call

To accept a call the following code snippet is used:

Map<String, String> userInfo = {}; // additional info for other call members

callSession.acceptCall(userInfo);

After this, your opponents will get a confirmation in the following callback:

callSession.onCallAcceptedByUser = (callSession, opponentId, [userInfo]) {
};

Also, both the caller and opponents will get a special callback with the remote stream:

callSession.onRemoteStreamReceived = (callSession, opponentId, mediaStream) async {
  // create a video renderer and set the media stream to it
  RTCVideoRenderer streamRender = RTCVideoRenderer();
  await streamRender.initialize();
  streamRender.srcObject = mediaStream;
  streamRender.objectFit = RTCVideoViewObjectFit.RTCVideoViewObjectFitCover;

  // create a view to put somewhere on the screen
  RTCVideoView videoView = RTCVideoView(streamRender);
};

From this point, you and your opponents should start seeing each other.
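For example, the renderers created in the callback above can be kept in the widget state and displayed in a simple grid. This is only an illustrative sketch: the names renderers and buildVideoGrid are not part of the SDK.

```dart
import 'package:flutter/material.dart';
import 'package:flutter_webrtc/flutter_webrtc.dart';

// illustrative only: keep one renderer per opponent id
final Map<int, RTCVideoRenderer> renderers = {};

// build a 2-column grid with one RTCVideoView per remote stream
Widget buildVideoGrid() {
  return GridView.count(
    crossAxisCount: 2,
    children: renderers.values
        .map((renderer) => RTCVideoView(renderer))
        .toList(),
  );
}
```

Remember to call dispose() on each renderer when the corresponding stream is removed or the session is closed.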

Receive a call in background

For mobile apps, there can be a situation when an opponent's app is either closed (killed) or in the background (inactive) state.

In this case, to be able to still receive a call request, you can use Push Notifications. The flow should be as follows:

  • the call initiator should send a push notification along with the call request
  • when the opponent's app is killed or in the background, the opponent will receive a push notification about the incoming call and will be able to accept/reject it. If the call is accepted or the push notification is tapped, the app will be opened, the user should auto-login and connect to chat, and then will be able to join the incoming call.
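The first step above could be sketched as follows, assuming the ConnectyCube Push Notifications API with CreateEventParams; the custom field names inside parameters are illustrative, not required by the SDK:

```dart
import 'package:connectycube_sdk/connectycube_sdk.dart';

// a hedged sketch: send a push notification to the opponents along with the call request
void sendCallPush(P2PSession callSession, Set<int> opponentsIds) {
  CreateEventParams params = CreateEventParams();
  params.parameters = {
    'message': 'Incoming call', // shown in the notification
    'session_id': callSession.sessionId, // lets the callee join the right call
    'caller_id': callSession.callerId,
  };
  params.notificationType = NotificationType.PUSH;
  params.environment = CubeEnvironment.PRODUCTION; // or DEVELOPMENT while debugging
  params.usersIds = opponentsIds.toList();

  createEvent(params.getEventForRequest()).then((cubeEvent) {
    // the push event was created
  }).catchError((error) {
    // handle the error
  });
}
```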

Please refer to Push Notifications API guides regarding how to integrate Push Notifications in your app.

For even better integration - CallKit and VoIP push notifications can be used. Please check CallKit and VoIP push notifications section on Push Notifications API guide page.

Reject a call

Map<String, String> userInfo = {}; // additional info for other call members

callSession.reject(userInfo);

After this, the caller will get a confirmation in the following callback:

callSession.onCallRejectedByUser = (callSession, opponentId, [userInfo]) {
};

Sometimes there could be a situation when you have received a call request and want to reject it, but the call session object has not arrived yet. This can happen when you have integrated CallKit to receive call requests while the app is in the background or killed. To reject the call in this case, the following snippet can be used:

String callSessionId; // the id of incoming call session
Set<int> callMembers; // the ids of all call members including the caller and excluding the current user
Map<String, String> userInfo = {}; // additional info about performed action (optional)

rejectCall(callSessionId, callMembers, userInfo: userInfo);

End a call

Map<String, String> userInfo = {}; // additional info for other call members

callSession.hungUp(userInfo);

After this, the opponents will get a confirmation in the following callback:

callSession.onReceiveHungUpFromUser = (callSession, opponentId, [userInfo]) {
};


Monitor session connections state

(coming soon)

Mute audio

bool mute = true; // false - to unmute, default value is false
callSession.setMicrophoneMute(mute);

Switch audio output

bool enabled = false; // true - to switch to speakerphone, default value is false
callSession.enableSpeakerphone(enabled);

Mute video

bool enabled = false; // true - to enable the local video track, default value for video calls is true
callSession.setVideoEnabled(enabled);

Switch video cameras

callSession.switchCamera().then((isFrontCameraSelected) {
  if (isFrontCameraSelected) {
    // front camera selected
  } else {
    // back camera selected
  }
}).catchError((error) {
  // switching camera failed
});

Get available cameras list

var cameras = callSession.getCameras(); // call only after starting the call

Use the custom media stream

MediaStream customMediaStream;

callSession.replaceMediaStream(customMediaStream);

Toggle the torch

var enable = true; // false - to disable the torch

callSession.setTorchEnabled(enable);

Screen Sharing

The Screen Sharing feature allows you to share the screen from your device with other call members. Currently, the ConnectyCube Flutter SDK supports the Screen Sharing feature on the following platforms:

  • Android;
  • iOS (In-app + Screen broadcast);
  • Web;
  • macOS;
  • Windows;

To switch to screen sharing during the call, use the following code snippet:

P2PSession callSession; // the existing call session

callSession.enableScreenSharing(true); // for switching to the screen sharing

callSession.enableScreenSharing(false); // for switching to the camera streaming

Android specifics when targeting SDK version 31 and above

After updating targetSdkVersion to 31 or above, you may encounter the following error:

java.lang.SecurityException: Media projections require a foreground service of type ServiceInfo.FOREGROUND_SERVICE_TYPE_MEDIA_PROJECTION

To avoid it, make the following changes in your project:

1.  Connect the flutter_background plugin to your project using:

flutter_background: ^x.x.x

2.  Add the following permissions to the manifest section of the file app_name/android/app/src/main/AndroidManifest.xml:

<uses-permission android:name="android.permission.WAKE_LOCK" />
<uses-permission android:name="android.permission.FOREGROUND_SERVICE"/>
<uses-permission android:name="android.permission.REQUEST_IGNORE_BATTERY_OPTIMIZATIONS" />

3.  Add the following service (provided by the flutter_background plugin) to the application section of the file app_name/android/app/src/main/AndroidManifest.xml:

<service
    android:name="de.julianassmann.flutter_background.IsolateHolderService"
    android:exported="false"
    android:foregroundServiceType="mediaProjection" />
4.  Create the following function somewhere in your project:

Future<bool> initForegroundService() async {
  final androidConfig = FlutterBackgroundAndroidConfig(
    notificationTitle: 'App name',
    notificationText: 'Screen sharing is in progress',
    notificationImportance: AndroidNotificationImportance.Default,
    notificationIcon: AndroidResource(
        name: 'ic_launcher_foreground', defType: 'drawable'),
  );
  return FlutterBackground.initialize(androidConfig: androidConfig);
}

and call it somewhere after the initialization of the app or before starting the screen sharing.

5.  Call FlutterBackground.enableBackgroundExecution() just before starting the screen sharing and FlutterBackground.disableBackgroundExecution() after ending the screen sharing or finishing the call.
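Putting steps 4 and 5 together, the Android screen-sharing flow could look like this (a sketch built from the snippets above; the helper function names are illustrative):

```dart
import 'package:connectycube_sdk/connectycube_sdk.dart';
import 'package:flutter_background/flutter_background.dart';

// start screen sharing on Android (targetSdkVersion 31+)
Future<void> startScreenSharing(P2PSession callSession) async {
  await initForegroundService(); // the function from step 4
  await FlutterBackground.enableBackgroundExecution();
  callSession.enableScreenSharing(true);
}

// stop screen sharing and release the foreground service
Future<void> stopScreenSharing(P2PSession callSession) async {
  callSession.enableScreenSharing(false);
  await FlutterBackground.disableBackgroundExecution();
}
```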

iOS screen sharing using the Screen Broadcasting feature

The ConnectyCube Flutter SDK supports two types of screen sharing on the iOS platform: In-app screen sharing and Screen Broadcasting. In-app screen sharing doesn't require any additional preparation on the app side, but the Screen Broadcasting feature does.

All the required changes have already been added to our P2P Calls sample.

Below is the step-by-step guide on adding it to your app. It contains the following steps:

  1. Add the Broadcast Upload Extension;
  2. Add required files from our sample to your iOS project;
  3. Update project configuration files with your credentials;

Add the Broadcast Upload Extension

To create the extension, add a new target to your application by selecting the Broadcast Upload Extension template. Fill in the desired name, change the language to Swift, and make sure Include UI Extension (see screenshot) is not selected, as we don't need a custom UI for our case, then press Finish. You will see that a new folder with the extension's name was added to the project tree, containing the SampleHandler.swift class. Also, make sure to update the Deployment Info for the newly created extension to iOS 14 or newer. To learn more about creating App Extensions, check the official documentation.

Broadcast Upload Extension

Add the required files from our sample to your own iOS project

After adding the extension, add the prepared files from our sample to your own project. Copy the following files from our Broadcast Extension directory: Atomic.swift, Broadcast Extension.entitlements (the name can differ according to your extension's name), DarwinNotificationCenter.swift, SampleHandler.swift (replace the automatically created file), SampleUploader.swift, SocketConnection.swift. Then open your project in Xcode and link these files with your iOS project using the Xcode tools. To do this, open the context menu of your extension directory, select 'Add Files to "Runner"...' (see screenshot), and select the files copied to your extension directory before.

Sync Broadcast Upload Extension files

Update project configuration files

Do the following for your iOS project configuration files:

1.  Add both the app and the extension to the same App Group. To do this, add the following lines to both (app and extension) *.entitlements files:

<key>com.apple.security.application-groups</key>
<array>
    <string>group.com.connectycube.flutter</string>
</array>

where group.com.connectycube.flutter is your App Group. To learn about working with App Groups, see Adding an App to an App Group. We recommend creating the App Group in the Apple Developer Console beforehand.

Next, add the App Group id value to the app's Info.plist for the RTCAppGroupIdentifier key:

<key>RTCAppGroupIdentifier</key>
<string>group.com.connectycube.flutter</string>

where group.com.connectycube.flutter is your App Group.

2.  Add a new key RTCScreenSharingExtension to the app's Info.plist with the extension's Bundle Identifier as the value:

<key>RTCScreenSharingExtension</key>
<string>com.connectycube.flutter.p2p-call-sample.app.Broadcast-Extension</string>

where com.connectycube.flutter.p2p-call-sample.app.Broadcast-Extension is the Bundle ID of your Broadcast Extension. Take it from Xcode:

Broadcast Extension Bundle ID

3.  Update SampleHandler.swift's appGroupIdentifier constant with the App Group name your app and extension are both registered to.

static let appGroupIdentifier = "group.com.connectycube.flutter"

where the group.com.connectycube.flutter is your app group.

4.  Make sure voip is added to UIBackgroundModes in the app's Info.plist, in order to work when the app is in the background:

<key>UIBackgroundModes</key>
<array>
    <string>voip</string>
</array>
After performing the mentioned actions, you can switch to screen sharing during the call using useIOSBroadcasting = true:

_callSession.enableScreenSharing(true, useIOSBroadcasting: true);

Requesting desktop capture source

Desktop platforms require a capture source (Screen or Window) for screen sharing. We prepared a ready-to-use widget that requests the available sources from the system and lets the user select one. After that, you can use it as the source for screen sharing.

In code, it can look as follows:

var desktopCapturerSource = isDesktop
    ? await showDialog<DesktopCapturerSource>(
      context: context,
      builder: (context) => ScreenSelectDialog())
    : null;

callSession.enableScreenSharing(true, desktopCapturerSource: desktopCapturerSource);

The default capture source (usually the default screen) will be captured if you set null as the capture source for a desktop platform.

WebRTC Stats reporting

Stats reporting is a powerful tool that provides detailed info about a call: information about the media, peer connection, codecs, certificates, etc. To enable stats reports, first set the stats reporting frequency using RTCConfig:

RTCConfig.instance.statsReportsInterval = 200; // receive stats report every 200 milliseconds

Then you can subscribe to the stream with reports using the instance of the call session:

_callSession.statsReports.listen((event) {
  var userId = event.userId; // the user's id the stats relate to
  var stats = event.stats;   // available stats
});
To disable fetching stats reports, set this parameter to 0.

Monitoring mic level and video bitrate using Stats

Also, we prepared a helpful manager, CubeStatsReportsManager, for processing stats reports and extracting useful information such as the opponent's mic level and video bitrate.

For it to work, you just need to configure the RTCConfig as described above. Then create an instance of CubeStatsReportsManager and initialize it with the call session:

final CubeStatsReportsManager _statsReportsManager = CubeStatsReportsManager();

_statsReportsManager.init(callSession);


After that, you can subscribe to the data of interest:

_statsReportsManager.micLevelStream.listen((event) {
  var userId = event.userId;
  var micLevel = event.micLevel; // the mic level from 0 to 1
});

_statsReportsManager.videoBitrateStream.listen((event) {
  var userId = event.userId;
  var bitRate = event.bitRate; // the video bitrate in kbits/sec
});

After finishing the call, you should dispose of the manager to avoid memory leaks. You can do it in the onSessionClosed callback:

void _onSessionClosed(session) {
  // ...
  _statsReportsManager.dispose();
  // ...
}

ConnectyCube Flutter SDK provides the ability to change some default parameters for the call session.

Media stream configurations

Use the instance of the RTCMediaConfig class to change some default media stream configs.

RTCMediaConfig mediaConfig = RTCMediaConfig.instance;
mediaConfig.minHeight = 720; // sets preferred minimal height for local video stream, default value is 360 
mediaConfig.minWidth = 1280; // sets preferred minimal width for local video stream, default value is 640 
mediaConfig.minFrameRate = 30; // sets preferred minimal framerate for local video stream, default value is 30 

Call connection configurations

Use the instance of the RTCConfig class to change some default call connection configs.

RTCConfig config = RTCConfig.instance;
config.noAnswerTimeout = 90; // sets the timeout in seconds before it stops dialing the opponents, default value is 60
config.dillingTimeInterval = 5; // time interval in seconds between 'invite call' packages, default value is 3 seconds, min value is 3 seconds
config.statsReportsInterval = 300; // the time interval in milliseconds for periodically fetching reports, default value is 0 (disabled)

