Multiparty Video Conferencing feature overview

ConnectyCube Multiparty Video Conferencing API is built on top of the WebRTC protocol and is based on the WebRTC SFU (Selective Forwarding Unit) architecture.

The maximum number of participants per conference call is 12.

Video Conferencing is available starting from the Hobby plan (you can still experiment with it on the Free plan).

To learn the difference between P2P calling and Conference calling, please read our ConnectyCube Calling API comparison blog page.

Features supported

  • Video/Audio Conference with up to 12 people
  • Join-Rejoin video room functionality (like Skype)
  • Guest rooms (coming soon)
  • Mute/Unmute audio/video streams
  • Display bitrate
  • Display mic level
  • Switch video input device (camera)
  • Switch audio input device (microphone)
  • Screen sharing

Get started with SDK

Follow the Getting Started guide on how to connect ConnectyCube SDK and start building your first app.

Code sample

There is a ready-to-go FREE Conference Calls Sample to help you better understand how to integrate video calling capabilities into your apps.


Required preparations for supported platforms


iOS

Add the following entries to your Info.plist file, located in <project root>/ios/Runner/Info.plist:

<key>NSCameraUsageDescription</key>
<string>$(PRODUCT_NAME) Camera Usage!</string>
<key>NSMicrophoneUsageDescription</key>
<string>$(PRODUCT_NAME) Microphone Usage!</string>

These entries allow your app to access the camera and microphone.


Android

Ensure the following permissions are present in your Android Manifest file, located in <project root>/android/app/src/main/AndroidManifest.xml:

<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />

If you need to use a Bluetooth device, please add:

<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />

The Flutter project template adds some of these, so they may already be there.

You will also need to set your build settings to Java 8, because the official WebRTC jar now uses static methods in the EglBase interface. Just add this to your app-level build.gradle:

android {
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}
If necessary, in the same build.gradle, increase the minSdkVersion in defaultConfig to 18 (the Flutter generator currently sets it to 16).


macOS

Add the following entries to your *.entitlements files, located in <project root>/macos/Runner:

<key>com.apple.security.network.client</key>
<true/>
<key>com.apple.security.device.camera</key>
<true/>
<key>com.apple.security.device.audio-input</key>
<true/>

These entries allow your app to access the internet, microphone, and camera.


The remaining supported platforms do not require any special preparations.

Create meeting

In order to have a conference call, a meeting object has to be created.

int now = DateTime.now().millisecondsSinceEpoch ~/ 1000;
CubeMeeting meeting = CubeMeeting()
  ..name = 'My meeting'
  ..attendees = [
    CubeMeetingAttendee(userId: 123, email: '...'),
    CubeMeetingAttendee(userId: 124, email: '...')
  ]
  ..startDate = now
  ..endDate = now + 60 * 60
  ..withChat = false
  ..record = false
  ..public = true
// ..notify = true // the notify feature is available starting from the Hobby plan (https://connectycube.com/pricing/)
// ..notifyBefore = CubeMeetingNotifyBefore(TimeMetric.HOURS, 1) // the notify feature is available starting from the Hobby plan (https://connectycube.com/pricing/)
  ..scheduled = false;

createMeeting(meeting)
  .then((createdMeeting) {
    var confRoomId = createdMeeting.meetingId;
  })
  .catchError((onError) {});

Once the meeting is created, you can use meeting.meetingId as the conference room identifier in the requests below when joining a call.

ConferenceClient setup

To manage Conference calls in Flutter, use ConferenceClient. See the code below for the available functionality.

ConferenceClient callClient = ConferenceClient.instance; // returns instance of ConferenceClient

Create call session

In order to use the Conference Calling API you need to create a session object, setting your current user and, optionally, the type of session (VIDEO or AUDIO). A ConferenceSession is created via ConferenceClient:

ConferenceClient callClient = ConferenceClient.instance;

int callType = CallType.VIDEO_CALL; // or CallType.AUDIO_CALL 

ConferenceSession callSession = await callClient.createCallSession(currentUserId, callType: callType);

Add listeners

callSession.onLocalStreamReceived = (mediaStream) {
  // called when the local media stream is completely prepared
};

callSession.onRemoteStreamReceived = (callSession, opponentId, mediaStream) {
  // called when a remote media stream is received from an opponent
};

callSession.onPublishersReceived = (publishers) {
  // called when new opponents/publishers are received
};

callSession.onPublisherLeft = (publisher) {
  // called when an opponent/publisher left the room
};

callSession.onError = (ex) {
  // called when an exception is received from the conference
};

callSession.onSessionClosed = (callSession) {
  // called when the current session is closed
};

Also, there is RTCSessionStateCallback in callSession to manage the connection state with a user:

//CallClass implements RTCSessionStateCallback<ConferenceSession>

void onConnectedToUser(ConferenceSession session, int userId) {}

void onConnectionClosedForUser(ConferenceSession session, int userId) {}

void onDisconnectedFromUser(ConferenceSession session, int userId) {}
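To start receiving these callbacks, attach the implementing class to the session. A sketch; setSessionCallbacksListener and removeSessionCallbacksListener are assumed here to mirror the ConnectyCube P2P calls API:

```dart
// e.g. from the class that implements RTCSessionStateCallback<ConferenceSession>
callSession.setSessionCallbacksListener(this);

// detach the listener when it is no longer needed, e.g. before closing the session
callSession.removeSessionCallbacksListener();
```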

Join video room

Just join the room and, for example, send an invite to an opponent:

callSession.joinDialog(roomId, ((publishers) {
  startCall(roomId, opponents, callSession.currentUserId); // e.g. notify opponents via a system message
}));


When you get onPublishersReceived you can subscribe to the active publisher:
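For example, a sketch assuming the subscribeToPublisher method, which accepts a publisher id delivered by onPublishersReceived:

```dart
callSession.onPublishersReceived = (publishers) {
  // subscribe to each received publisher to start getting their media stream
  publishers.forEach((publisher) => callSession.subscribeToPublisher(publisher));
};
```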


To unsubscribe from publisher:
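A sketch, assuming the matching unsubscribeFromPublisher method:

```dart
// stop receiving the media stream of the given publisher
callSession.unsubscribeFromPublisher(publisher);
```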



To leave current room session:
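A sketch, assuming the session's leave method:

```dart
// leave the conference room and close the session
callSession.leave();
```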


Mute audio

bool mute = true; // false - to unmute, default value is false
callSession.setMicrophoneMute(mute);

Switch audio output

bool enabled = false; // true - to switch to the speakerphone, default value is false
callSession.enableSpeakerphone(enabled);

Mute video

bool enabled = false; // true - to enable the local video track, default value for video calls is true
callSession.setVideoEnabled(enabled);

Switch video cameras

callSession.switchCamera().then((isFrontCameraSelected) {
  if (isFrontCameraSelected) {
    // front camera selected
  } else {
    // back camera selected
  }
}).catchError((error) {
  // switching camera failed
});

Get available cameras list

var cameras = callSession.getCameras(); // call only after starting the call

Use the custom media stream

MediaStream customMediaStream;
callSession.replaceMediaStream(customMediaStream);


Toggle the torch

var enable = true; // false - to disable the torch
callSession.setTorchEnabled(enable);


Screen Sharing

The Screen Sharing feature allows you to share the screen from your device with other call members. Currently, the ConnectyCube Flutter SDK supports Screen Sharing on all supported platforms.

For switching to screen sharing during the call, use the following code snippet:

ConferenceSession callSession; // the existing call session

callSession.enableScreenSharing(true); // for switching to the screen sharing

callSession.enableScreenSharing(false); // for switching to the camera streaming

Android specifics when targeting targetSdkVersion 31 and above

After updating targetSdkVersion to 31 or above, you may encounter the following error:

java.lang.SecurityException: Media projections require a foreground service of type ServiceInfo.FOREGROUND_SERVICE_TYPE_MEDIA_PROJECTION

To avoid it, make the following changes in your project:

1.  Add the flutter_background plugin to your project's dependencies:

flutter_background: ^x.x.x

2.  Add the following permissions to the manifest section of app_name/android/app/src/main/AndroidManifest.xml:

<uses-permission android:name="android.permission.WAKE_LOCK" />
<uses-permission android:name="android.permission.FOREGROUND_SERVICE"/>
<uses-permission android:name="android.permission.REQUEST_IGNORE_BATTERY_OPTIMIZATIONS" />

3.  Add the following service to the application section of app_name/android/app/src/main/AndroidManifest.xml (this is the foreground service provided by the flutter_background plugin):

<service
    android:name="de.julianassmann.flutter_background.IsolateHolderService"
    android:exported="false"
    android:foregroundServiceType="mediaProjection" />
4.  Create the following function somewhere in your project:

Future<bool> initForegroundService() async {
  final androidConfig = FlutterBackgroundAndroidConfig(
    notificationTitle: 'App name',
    notificationText: 'Screen sharing is in progress',
    notificationImportance: AndroidNotificationImportance.Default,
    notificationIcon: AndroidResource(
      name: 'ic_launcher_foreground',
      defType: 'drawable'),
  );
  return FlutterBackground.initialize(androidConfig: androidConfig);
}

and call it somewhere after the initialization of the app or before starting the screen sharing.

5.  Call FlutterBackground.enableBackgroundExecution() just before starting the screen sharing and FlutterBackground.disableBackgroundExecution() after ending the screen sharing or finishing the call.
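Steps 4 and 5 combined can be sketched as follows (startScreenSharing and stopScreenSharing are hypothetical helper names):

```dart
Future<void> startScreenSharing(ConferenceSession callSession) async {
  // make sure the foreground service is initialized (step 4)
  await initForegroundService();

  // enable background execution right before the screen capture begins
  await FlutterBackground.enableBackgroundExecution();
  callSession.enableScreenSharing(true);
}

Future<void> stopScreenSharing(ConferenceSession callSession) async {
  callSession.enableScreenSharing(false);

  // release the foreground service when screen sharing ends
  await FlutterBackground.disableBackgroundExecution();
}
```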

iOS screen sharing using the Screen Broadcasting feature

The ConnectyCube Flutter SDK supports two types of Screen sharing on the iOS platform: In-app screen sharing and Screen Broadcasting. In-app screen sharing doesn't require any additional preparation on the app side, but the Screen Broadcasting feature does.

All required features are already added to our P2P Calls sample.

Below is the step-by-step guide on adding it to your app. It contains the following steps:

  1. Add the Broadcast Upload Extension;
  2. Add required files from our sample to your iOS project;
  3. Update project configuration files with your credentials;

Add the Broadcast Upload Extension

For creating the extension you need to add a new target to your application, selecting the Broadcast Upload Extension template. Fill in the desired name, change the language to Swift, make sure Include UI Extension (see screenshot) is not selected, as we don't need custom UI for our case, then press Finish. You will see that a new folder with the extension's name was added to the project's tree, containing the SampleHandler.swift class. Also, make sure to update the Deployment Info, for the newly created extension, to iOS 14 or newer. To learn more about creating App Extensions check the official documentation.

Broadcast Upload Extension

Add the required files from our sample to your own iOS project

After adding the extension, you should add the prepared files from our sample to your own project. Copy the following files from our Broadcast Extension directory: Atomic.swift, Broadcast Extension.entitlements (the name can differ according to your extension's name), DarwinNotificationCenter.swift, SampleHandler.swift (replace the automatically created file), SampleUploader.swift, SocketConnection.swift. Then open your project in Xcode and link these files with your iOS project using Xcode tools. For that, open the context menu of your extension directory, select 'Add Files to "Runner"...' (see screenshot) and select the files you copied to your extension directory before.

Sync Broadcast Upload Extension files

Update project configuration files

Do the following for your iOS project configuration files:

1.  Add both the app and the extension to the same App Group. For that, add the following lines to both (app and extension) *.entitlements files:

<key>com.apple.security.application-groups</key>
<array>
    <string>group.com.connectycube.flutter</string>
</array>

where group.com.connectycube.flutter is your App Group. To learn about working with app groups, see Adding an App to an App Group. We recommend creating the app group in the Apple Developer Console beforehand.

Next, add the App Group id value to the app's Info.plist for the RTCAppGroupIdentifier key:

<key>RTCAppGroupIdentifier</key>
<string>group.com.connectycube.flutter</string>

where group.com.connectycube.flutter is your App Group.

2.  Add a new key RTCScreenSharingExtension to the app's Info.plist with the extension's Bundle Identifier as the value:

<key>RTCScreenSharingExtension</key>
<string>com.connectycube.flutter.p2p-call-sample.app.Broadcast-Extension</string>

where com.connectycube.flutter.p2p-call-sample.app.Broadcast-Extension is the Bundle ID of your Broadcast Extension. Take it from Xcode:

Broadcast Extension Bundle ID

3.  Update SampleHandler.swift's appGroupIdentifier constant with the App Group name your app and extension are both registered to.

static let appGroupIdentifier = "group.com.connectycube.flutter"

where the group.com.connectycube.flutter is your app group.

4.  Make sure voip is added to UIBackgroundModes in the app's Info.plist, so screen sharing keeps working when the app is in the background:

<key>UIBackgroundModes</key>
<array>
    <string>voip</string>
</array>
After performing the mentioned actions, you can switch to Screen sharing during the call using useIOSBroadcasting = true:

_callSession.enableScreenSharing(true, useIOSBroadcasting: true);

Requesting desktop capture source

Desktop platforms require a capture source (a Screen or a Window) for screen sharing. We prepared a ready-to-use widget that requests the available sources from the system and lets the user select one. After that, the selection can be used as the source for screen sharing.

In code it can look like this:

var desktopCapturerSource = isDesktop
    ? await showDialog<DesktopCapturerSource>(
      context: context,
      builder: (context) => ScreenSelectDialog())
    : null;

callSession.enableScreenSharing(true, desktopCapturerSource: desktopCapturerSource);

If null is set as the capture source on a desktop platform, the default capture source (usually the default screen) will be captured.

WebRTC Stats reporting

Stats reporting is a powerful tool that provides detailed info about a call: media, peer connection, codecs, certificates, etc. To enable stats reports, first set the reporting frequency using RTCConfig:

RTCConfig.instance.statsReportsInterval = 200; // receive stats report every 200 milliseconds

Then you can subscribe to the stream with reports using the instance of the call session:

_callSession.statsReports.listen((event) {
  var userId = event.userId; // the id of the user the stats relate to
  var stats = event.stats;   // available stats
});

To disable fetching stats reports, set this parameter to 0.

Monitoring mic level and video bitrate using Stats

We also prepared a helpful manager, CubeStatsReportsManager, for processing stats reports and extracting useful information such as the opponent's mic level and video bitrate.

For it to work, you just need to configure the RTCConfig as described above, then create an instance of CubeStatsReportsManager and initialize it with the call session.

final CubeStatsReportsManager _statsReportsManager = CubeStatsReportsManager();
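The initialization can look like this (assuming the manager's init method takes the call session):

```dart
_statsReportsManager.init(callSession);
```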


After that you can subscribe on the interested data:

_statsReportsManager.micLevelStream.listen((event) {
  var userId = event.userId;
  var micLevel = event.micLevel; // the mic level, from 0 to 1
});

_statsReportsManager.videoBitrateStream.listen((event) {
  var userId = event.userId;
  var bitRate = event.bitRate; // the video bitrate in kbit/s
});

After finishing the call, you should dispose of the manager to avoid memory leaks. You can do it in the onSessionClosed callback:

void _onSessionClosed(session) {
  // ...
  _statsReportsManager.dispose();
  // ...
}


ConnectyCube Flutter SDK provides the ability to change some default parameters of a call session.

Media stream configurations

Use instance of RTCMediaConfig class to change some default media stream configs.

RTCMediaConfig mediaConfig = RTCMediaConfig.instance;
mediaConfig.minHeight = 720; // sets preferred minimal height for local video stream, default value is 320 
mediaConfig.minWidth = 1280; // sets preferred minimal width for local video stream, default value is 480 
mediaConfig.minFrameRate = 30; // sets preferred minimal framerate for local video stream, default value is 25 
mediaConfig.maxBandwidth = 512; // sets initial maximum bandwidth in kbps, set to `0` or `null` for disabling the limitation, default value is 0

Call quality

Limit bandwidth

Although the WebRTC engine automatically adjusts quality based on the available Internet bandwidth, it is sometimes better to cap the maximum available bandwidth, which results in a better and smoother user experience. For example, if you know you have a bad internet connection, you can limit the maximum available bandwidth to e.g. 256 Kbit/s.

This can be done either when initiating a call, via RTCMediaConfig.instance.maxBandwidth = 512, which limits the maximum available bandwidth for ALL participants, and/or during a call:

callSession.setMaxBandwidth(256); // limit the maximum bandwidth for the current user to 256 Kbit/s

which will result in limiting the max available bandwidth for current user only.

Call connection configurations

ConferenceConfig.instance.url = SERVER_ENDPOINT // 'wss://...:8989';

Signaling implementation

To implement regular calls with events such as call, reject, and hang up, a signaling mechanism is required.

ConnectyCube system messages with predefined custom properties can be used as such a mechanism.

Start Call

Just join the room and send an invitation start call message to opponents:

var systemMessagesManager = CubeChatConnection.instance.systemMessagesManager;

sendCallMessage(String roomId, List<int> participantIds) {
  List<CubeMessage> callMsgList = buildCallMessages(roomId, participantIds);
  callMsgList.forEach((callMsg) {
    callMsg.properties["callStart"] = '1';
    callMsg.properties["participantIds"] = participantIds.join(',');
  });
  callMsgList.forEach((msg) => systemMessagesManager.sendSystemMessage(msg));
}

List<CubeMessage> buildCallMessages(String roomId, List<int> participantIds) {
  return participantIds.map((userId) {
    var msg = CubeMessage();
    msg.recipientId = userId;
    msg.properties = {"janusRoomId": roomId};
    return msg;
  }).toList();
}

Reject Call

Send a reject message when declining a call or when busy:

sendRejectMessage(String roomId, bool isBusy, int participantId) {
  List<CubeMessage> callMsgList = buildCallMessages(roomId, [participantId]);
  callMsgList.forEach((callMsg) {
    callMsg.properties["callRejected"] = '1';
    callMsg.properties["busy"] = isBusy.toString();
  });
  callMsgList.forEach((msg) => systemMessagesManager.sendSystemMessage(msg));
}

End call

Send an end-call message on hang up or answer timeout:

sendEndCallMessage(String roomId, List<int> participantIds) {
  List<CubeMessage> callMsgList = buildCallMessages(roomId, participantIds);
  callMsgList.forEach((callMsg) {
    callMsg.properties["callEnd"] = '1';
  });
  callMsgList.forEach((msg) => systemMessagesManager.sendSystemMessage(msg));
}

Get call events

Listen and parse all call events with systemMessagesManager:

systemMessagesManager.systemMessagesStream.listen((cubeMessage) => parseCallMessage(cubeMessage));

parseCallMessage(CubeMessage cubeMessage) {
  final properties = cubeMessage.properties;
  if (properties.containsKey("callStart")) {
    String roomId = properties["janusRoomId"];
    List<int> participantIds = properties["participantIds"].split(',').map((id) => int.parse(id)).toList();
    if (this._roomId == null) {
      this._roomId = roomId;
      this._initiatorId = cubeMessage.senderId;
      this._participantIds = participantIds;
      // handleNewCall();
    }
  } else if (properties.containsKey("callRejected")) {
    String roomId = properties["janusRoomId"];
    bool isBusy = properties["busy"] == 'true';
    if (this._roomId == roomId) {
      // handleRejectCall();
    }
  } else if (properties.containsKey("callEnd")) {
    String roomId = properties["janusRoomId"];
    if (this._roomId == roomId) {
      // handleEndCall();
    }
  }
}

Adding user to call

To add a user to the current call, you can send an invite message with the current roomId and participantIds:

sendCallMessage(String roomId, List<int> participantIds) {
  List<CubeMessage> callMsgList = buildCallMessages(roomId, participantIds);
  callMsgList.forEach((callMsg) {
    callMsg.properties["callStart"] = '1';
    callMsg.properties["participantIds"] = participantIds.join(',');
  });
  callMsgList.forEach((msg) => systemMessagesManager.sendSystemMessage(msg));
}

Then, on the receiver side, when the new user successfully joins the room, they automatically subscribe to all active participants in the current call (at the same time, the other participants receive onPublishersReceived and can subscribe to that new user).

Retrieve meetings

Retrieve a meeting by id:

getMeetings({'_id': meetingId})
  .then((meetings) {})
  .catchError((onError) {});          

Retrieve a list of meetings:

getMeetings()
  .then((meetings) {})
  .catchError((onError) {});

or use getMeetingsPaged to request meetings using the pagination feature:

getMeetingsPaged(limit: 10, skip: 20, params: {'scheduled': true})
  .then((result) async {
    var meetings = result.items;
  })
  .catchError((onError) {});

Edit meeting

A meeting creator can edit a meeting:

CubeMeeting originalMeeting; // a meeting created earlier; must contain `meetingId`
originalMeeting.name = updatedName;
originalMeeting.startDate = updatedStartDate;
originalMeeting.endDate = updatedEndDate;
originalMeeting.attendees = [
  CubeMeetingAttendee(userId: 125, email: 'test@email.com'),
];

updateMeeting(originalMeeting)
  .then((updatedMeeting) {})
  .catchError((onError) {});

or use the updateMeetingById method to update only some fields of the meeting model:

updateMeetingById(meetingId, {'record': true})
  .then((updatedMeeting) {})
  .catchError((onError) {});

Delete meeting

A meeting creator can delete a meeting:

deleteMeeting(meetingId)
  .then((voidResult) {})
  .catchError((onError) {});


Recording

Server-side recording is available. Read more about the Recording feature here: https://connectycube.com/2021/02/23/connectycube-releases-server-side-calls-recording-along-with-new-meetings-api/