Since the release of v3.0.0 of Agora’s SDK for React Native, users can join an unlimited number of channels at the same time. However, you can publish your own camera feed to only one channel at a time.
This ability can be really handy in the case of multiple breakout rooms, where you can both send and receive video from a primary room while also receiving videos from secondary rooms.
We’ll be using the Agora RTC SDK for React Native for our example.
Before diving into how it works, let’s look at a few key points:
- We’ll use the SDK to connect to the first channel and join a video call normally. We’ll be streaming our video as well as receiving video from other users on the channel.
- Next, we’ll join a second channel to receive video streams from all the users on that channel. Note that users on channel 2 will not be able to receive our video.
- The two channels are separate: users on channel 1 and channel 2 don’t see each other. We can extend this functionality to join as many channels as required.
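The visibility rules above can be captured in a small toy model (illustrative only; the type and function names here are hypothetical, not part of the Agora SDK): a viewer receives a publisher’s video on a channel only when the viewer subscribes to that channel and the publisher publishes to it.

```typescript
type ChannelName = 'channel-1' | 'channel-2';

interface Membership {
  publish: ChannelName[];   // channels we send our video to
  subscribe: ChannelName[]; // channels we receive video from
}

// Our app: publish only to channel 1, but subscribe to both.
const us: Membership = {
  publish: ['channel-1'],
  subscribe: ['channel-1', 'channel-2'],
};

// A typical user who joined only channel 2.
const channelTwoUser: Membership = {
  publish: ['channel-2'],
  subscribe: ['channel-2'],
};

// A viewer receives a publisher's video on a channel only if the viewer
// subscribes to it and the publisher publishes to it.
function canReceive(
  viewer: Membership,
  publisher: Membership,
  channel: ChannelName,
): boolean {
  return viewer.subscribe.includes(channel) && publisher.publish.includes(channel);
}
```

Under this model we can see channel 2’s users, but they never receive our feed, since we don’t publish to channel 2.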
Structure of our example
This is the structure of the application:
│ ├── Permission.ts
│ └── Style.ts
Download the source
If you want to jump to the code and try it out for yourself, you can look at the readme for steps on how to run the app. The code is open source and available on GitHub. The app uses channel-1 and channel-2 as the channel names.
When you run the app, you’ll see two buttons: one to start and one to end the call. When you tap the start call button, you should see your video in the top row, which contains videos from channel 1. The bottom row contains videos from channel 2.
Note: This guide does not implement token authentication, which is recommended for all RTE apps running in production environments. For more information about token-based authentication within the Agora platform, please refer to this guide: https://docs.agora.io/en/Video/token?platform=All%20Platforms
How the App Works
App.tsx will be the entry point into the app. We’ll have all our code in this file:
We start by writing the import statements. Next, we define an interface for our application state containing the following:
- appId: Our Agora App ID
- token: Token generated to join the channel
- channelNameOne: Name for channel 1
- channelNameTwo: Name for channel 2
- joinSucceed: Boolean value to store if we’ve connected successfully
- peerIdsOne: Array to store the UIDs of other users in channel 1
- peerIdsTwo: Array to store the UIDs of other users in channel 2
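Put together, the state interface could look like the following sketch (field names come from the list above; the placeholder values are illustrative):

```typescript
// State interface for the app, mirroring the fields described above.
interface State {
  appId: string;            // our Agora App ID
  token: string | null;     // token generated to join the channel
  channelNameOne: string;   // name for channel 1
  channelNameTwo: string;   // name for channel 2
  joinSucceed: boolean;     // whether we've connected successfully
  peerIdsOne: number[];     // UIDs of other users in channel 1
  peerIdsTwo: number[];     // UIDs of other users in channel 2
}

// Illustrative initial values; '<your-agora-app-id>' is a placeholder.
const initialState: State = {
  appId: '<your-agora-app-id>',
  token: null,
  channelNameOne: 'channel-1',
  channelNameTwo: 'channel-2',
  joinSucceed: false,
  peerIdsOne: [],
  peerIdsTwo: [],
};
```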
We define a class-based component: the _rtcEngine variable will store the instance of the RtcEngine class, and the _channel variable will store the instance of the RtcChannel class, which we can use to access the SDK functions.
In the constructor, we set our state variables and request permission for recording audio on Android. (We use a helper function from Permission.ts, as described below.) When the component mounts, we call the init function, which initializes the RTC engine and RTC channel. When the component unmounts, we destroy our engine and channel instances.
We use the App ID to create our engine instance. The engine instance will be used to connect to channel 1, where we both send and receive the video. We also create our channel instance using the name of our second channel. The channel instance will be used only to receive videos from channel 2.
The RTC engine triggers a userJoined event for each user already present when we join the channel and for each new user who joins afterward. The userOffline event is triggered when a user leaves the channel. We use event listeners on _engine and _channel to store and maintain our peerIdsOne and peerIdsTwo arrays containing the UIDs of users on the two channels.
We also attach a listener for joinChannelSuccess to update our state variable which is used to render our UI while we’re in the call.
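The bookkeeping those listeners perform can be expressed as two pure helpers (a sketch; in the real component the listeners would call setState with the result):

```typescript
// What a userJoined listener does to a peer list: add the UID if new.
function addPeer(peerIds: number[], uid: number): number[] {
  // Guard against duplicates in case userJoined fires twice for the same UID.
  return peerIds.includes(uid) ? peerIds : [...peerIds, uid];
}

// What a userOffline listener does to a peer list: remove the UID.
function removePeer(peerIds: number[], uid: number): number[] {
  return peerIds.filter((id) => id !== uid);
}
```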
Functions for our buttons
- The startCall function joins both channels using the joinChannel method.
- The endCall function leaves both channels using the leaveChannel method and updates the state.
- The destroy function destroys the instances of our engine and channel.
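The three handlers can be sketched as follows, with a minimal structural type standing in for the SDK’s RtcEngine and RtcChannel classes (a hypothetical shape for illustration; the real joinChannel and leaveChannel methods take token and channel-name arguments and return promises):

```typescript
// Minimal stand-in for the subset of the engine/channel API used here.
interface Connection {
  joinChannel(): void;
  leaveChannel(): void;
  destroy(): void;
}

function startCall(engine: Connection, channel: Connection): void {
  engine.joinChannel();  // channel 1: we publish and subscribe
  channel.joinChannel(); // channel 2: we only subscribe
}

function endCall(engine: Connection, channel: Connection): void {
  engine.leaveChannel();
  channel.leaveChannel();
  // the real app also resets joinSucceed and the peer ID arrays here
}

function destroy(engine: Connection, channel: Connection): void {
  engine.destroy();
  channel.destroy();
}
```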
Rendering our UI
We define the render function for displaying buttons to start and end the call and to display user videos from both channels.
We define a _renderVideos function to render the videos from both channels, using the _renderRemoteVideosOne and _renderRemoteVideosTwo functions for channel 1 and channel 2, respectively. Each function uses a ScrollView to hold the videos from its channel. We render remote users’ videos by passing the UIDs stored in the peerIds arrays to the RtcRemoteView.SurfaceView component.
Permission.ts exports a helper function to request microphone permission from the Android OS.
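The helper’s logic can be sketched with the platform permission API injected as a callback so the flow is self-contained (illustrative only; the real helper uses PermissionsAndroid from react-native, and the function name here is an assumption):

```typescript
type PermissionResult = 'granted' | 'denied';

// Request microphone permission on Android; iOS prompts automatically
// based on the app's Info.plist entries, so no runtime request is needed.
async function requestAudioPermission(
  platform: string,
  request: (permission: string) => Promise<PermissionResult>,
): Promise<boolean> {
  if (platform !== 'android') return true;
  const result = await request('android.permission.RECORD_AUDIO');
  return result === 'granted';
}
```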
The Style.ts file contains the styling for the components.
That’s how we can build a video call app that can connect to two channels simultaneously. You can refer to the Agora React Native API Reference to see methods that can help you quickly add many features like muting the mic, setting audio profiles and audio mixing.
Want to build Real-Time Engagement apps?
If you have questions, please call us at 408-879-5885. We’d be happy to help you add voice or video chat, streaming and messaging into your apps.