Live audio streaming is increasingly popular across a wide range of uses, from live podcasts and interviews to live music performances. The possibilities are endless once a few broadcasters can engage with an audience in real time.
There’s an easy way to accomplish live audio streaming using the Agora React Native SDK. In this tutorial, we’ll walk through building a live audio broadcasting app that can have multiple broadcasters and host thousands of users by utilizing the Agora Audio SDK. We’ll go over the structure, setup, and execution of the app before diving into the code. The open-source code is available here.
We’ll be using the Agora RTC SDK for React Native for this example. I’m using v3.2.2 at the time of writing.
Creating an Agora Account
Sign up at Agora Console and log in to the dashboard.
Navigate to the Project List tab under the Project Management tab and create a project by clicking the blue Create button.
Create a project and retrieve the App ID. (When prompted to use App ID + Certificate, select only App ID.) The App ID will be used to authorize your requests while you’re developing the application, without generating tokens.
Note: This guide does not implement token authentication, which is recommended for all RTE apps running in production environments. For more information about token-based authentication within the Agora platform, please refer to this guide.
Structure of Our Example
This is the structure of the application:
```
.
├── android
├── components
│   ├── Permission.ts
│   └── Style.ts
├── ios
├── App.tsx
└── index.js
```
Running the App
You’ll need to have a recent version of Node.js and NPM installed:
- Make sure you’ve set up an Agora account, set up a project, and generated an App ID (as discussed above).
- Download and extract the ZIP file from the master branch.
- Run `npm install` to install the app dependencies in the unzipped directory.
- Navigate to `./App.tsx` and enter the App ID as `appId: YourAppIdHere` in the state declaration.
- If you’re building for iOS, open a terminal and execute `cd ios && pod install`.
- Connect your device and run `npx react-native run-android` / `npx react-native run-ios` to start the app. Give it a few minutes to build the app and install it on your device.
- Once you see the home screen on your mobile device (or emulator), click the start call button on the device.
That’s it. You should have an audio broadcast going between the two devices.
The app uses `channel-x` as the channel name.
Before we dive into the code, let’s get a few basics out of the way:
- We’ll use the Agora RTC (Real-time Communication) SDK to connect to a channel and join an audio call.
- We can have multiple users broadcasting to a channel. All users who join that channel as audience members can listen to the broadcasters.
- The audience can dynamically switch to a broadcaster role.
- The Agora RTC SDK uses unique IDs (UIDs) for each user. To associate these UIDs with a username, we’ll use the Agora RTM (Real-Time Messaging) SDK to signal the username to others on the call. We’ll discuss how it’s done below.
Let’s take a look at how the code works:
App.tsx will be the entry point into the app. We’ll have all our code in this file. When you open the app, there will be a username field with three buttons: to join the call, end the call, and toggle our user role between broadcaster and audience.
We start by writing the import statements we need. Next, we define an interface for our application state containing the following:
- `appId`: our Agora App ID
- `token`: token generated to join the channel
- `isHost`: boolean value to switch between audience and broadcaster
- `channelName`: name for the channel
- `joinSucceed`: boolean value to store whether we’ve connected successfully
- `rtcUid`: local user’s UID on joining the RTC channel
- `myUsername`: local user’s name to log in to RTM
- `usernames`: a dictionary associating remote users’ RTC UIDs with the usernames we’ll get using RTM
- `peerIds`: an array to store the UIDs of other users in the channel
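The state described above might look like the following sketch. The field names follow this walkthrough, but the exact types and initial values are assumptions, not the verbatim source:

```typescript
// Sketch of the application state described above.
// Types and initial values are inferred from the field descriptions.
interface State {
  appId: string;                        // our Agora App ID
  token: string | null;                 // token to join the channel (null while testing with App ID only)
  isHost: boolean;                      // broadcaster (true) or audience (false)
  channelName: string;                  // name for the channel
  joinSucceed: boolean;                 // whether we've connected successfully
  rtcUid: number;                       // local user's UID in the RTC channel
  myUsername: string;                   // local user's RTM login name
  usernames: { [uid: string]: string }; // remote RTC UID -> username, filled via RTM
  peerIds: number[];                    // UIDs of other users in the channel
}

const initialState: State = {
  appId: 'YourAppIdHere',
  token: null,
  isHost: true,
  channelName: 'channel-x',
  joinSucceed: false,
  rtcUid: Math.floor(Math.random() * 100), // a random UID for this session
  myUsername: '',
  usernames: {},
  peerIds: [],
};
```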
We define a class-based component: the _rtcEngine variable will store the instance of the RtcEngine class, and the _rtmEngine variable will store the instance of the RtmEngine class, which we can use to access the SDK functions.
In the constructor, we set our state variables and request permission for recording audio on Android. (We use a helper function from Permission.ts, as described below.) When the component is mounted, we call the initRTC and initRTM functions, which initialize the RTC and RTM engines using the App ID. When the component unmounts, we destroy our engine instances.
We use the App ID to create our engine instance. Next, we set channelProfile to Live Broadcasting and clientRole based on our isHost state variable value.
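The profile-and-role setup can be sketched as below. The numeric values mirror the SDK’s `ChannelProfile` and `ClientRole` enums, but the engine here is a self-contained recorder stand-in rather than the real `RtcEngine`, so treat the method names as assumptions:

```typescript
// Stand-ins for the react-native-agora v3 enums (LiveBroadcasting = 1,
// Broadcaster = 1, Audience = 2 in the Agora SDKs).
const ChannelProfile = { Communication: 0, LiveBroadcasting: 1 } as const;
const ClientRole = { Broadcaster: 1, Audience: 2 } as const;

// Minimal engine shape for this sketch; the real RtcEngine methods are async.
interface EngineLike {
  calls: string[];
  setChannelProfile(profile: number): void;
  setClientRole(role: number): void;
}

function makeFakeEngine(): EngineLike {
  return {
    calls: [],
    setChannelProfile(profile) { this.calls.push(`profile:${profile}`); },
    setClientRole(role) { this.calls.push(`role:${role}`); },
  };
}

// The initRTC logic described above: Live Broadcasting profile first,
// then a client role chosen from the isHost state flag.
function initRTC(engine: EngineLike, isHost: boolean): void {
  engine.setChannelProfile(ChannelProfile.LiveBroadcasting);
  engine.setClientRole(isHost ? ClientRole.Broadcaster : ClientRole.Audience);
}
```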
The RTC engine triggers a `userJoined` event for each user already present when we join the channel and for each new user who joins later. The `userOffline` event is triggered when a user leaves the channel. We use event listeners to keep our `peerIds` array in sync.
Note: Audience members don’t trigger the userJoined/userOffline event.
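The listener bodies reduce to two small updates on `peerIds`, which can be written as pure helpers (a sketch; the names follow the events above):

```typescript
// Handler logic for userJoined: the SDK can re-fire events,
// so only add a UID once.
function onUserJoined(peerIds: number[], uid: number): number[] {
  return peerIds.includes(uid) ? peerIds : [...peerIds, uid];
}

// Handler logic for userOffline: drop the departing UID.
function onUserOffline(peerIds: number[], uid: number): number[] {
  return peerIds.filter((id) => id !== uid);
}
```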
We’re using RTM to send our username to the other users on the call. This is how we associate usernames with RTC UIDs.
- When a user joins a channel, they send a channel message containing their UID and username to all channel members.
- On receiving a channel message, all users add the key-value pair to their username dictionary.
- When a new user joins, all members on the channel send that user a peer message with the same schema.
- On receiving peer messages, we do the same (add the key-value pair to the dictionary) and update our usernames.
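The exact wire format isn’t shown above, so this sketch assumes a small JSON payload carrying the sender’s RTC UID and username; the merge logic is the same for channel and peer messages:

```typescript
// Assumed message payload: the sender's RTC UID plus their username.
interface UsernameMessage { uid: number; username: string; }

// Serialize the pair for rtm.sendMessage / sendMessageToPeer.
function encodeUsername(uid: number, username: string): string {
  return JSON.stringify({ uid, username });
}

// On channelMessageReceived / messageReceived: merge the pair into the
// usernames dictionary, keyed by RTC UID.
function applyUsernameMessage(
  usernames: { [uid: string]: string },
  raw: string
): { [uid: string]: string } {
  const msg: UsernameMessage = JSON.parse(raw);
  return { ...usernames, [String(msg.uid)]: msg.username };
}
```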
Following our plan, we attach event listeners with functions to populate and update usernames on the `channelMessageReceived` (broadcast message to the channel), `messageReceived` (peer message), and `channelMemberJoined` events. We also create a client on the engine using the same App ID.
Functions for Our Buttons
The `toggleRole` function updates the state and calls the `setClientRole` function with the correct argument based on the state.
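As a pure state transition, `toggleRole` reduces to flipping `isHost` and picking the matching role value (a sketch; the numeric values mirror the SDK’s `ClientRole` enum):

```typescript
// Role values as in the Agora SDK: Broadcaster = 1, Audience = 2.
const ROLES = { Broadcaster: 1, Audience: 2 } as const;

// Flip isHost and report the role value that should then be
// passed to setClientRole.
function toggleRole(state: { isHost: boolean }): { isHost: boolean; nextRole: number } {
  const isHost = !state.isHost;
  return { isHost, nextRole: isHost ? ROLES.Broadcaster : ROLES.Audience };
}
```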
The `startCall` function checks that a username is entered, then joins the RTC channel. It also logs in to RTM, joins the RTM channel, and sends the channel message carrying the username, as we discussed before.
The `endCall` function leaves the RTC channel, sends a message that remote users use to remove our username from their dictionaries, and then leaves the RTM channel and logs out of RTM.
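The call order in both functions can be sketched against a recorder stand-in. In the real app these are method calls on `_rtcEngine` and `_rtmEngine`; the string labels below are illustrative, not the SDK method signatures:

```typescript
type CallLog = string[];

// startCall order: require a username, join RTC, then log in to RTM,
// join the RTM channel, and broadcast the username message.
function startCall(log: CallLog, username: string, channel: string): boolean {
  if (!username) return false;            // don't start without a username
  log.push(`rtc.joinChannel:${channel}`);
  log.push(`rtm.login:${username}`);
  log.push(`rtm.joinChannel:${channel}`);
  log.push('rtm.sendChannelMessage');     // announce our username
  return true;
}

// endCall order: leave RTC, tell peers to drop our username,
// then leave the RTM channel and log out.
function endCall(log: CallLog, channel: string): void {
  log.push(`rtc.leaveChannel:${channel}`);
  log.push('rtm.sendChannelMessage');     // removal message
  log.push(`rtm.leaveChannel:${channel}`);
  log.push('rtm.logout');
}
```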
Rendering Our UI
We define the render function for displaying buttons to start and end the call as well as to toggle roles. We also define a `_renderUsers` function that renders a list of all broadcasters and audience members.
The Permission.ts file exports a helper function to request microphone permission from the Android OS.
The Style.ts file contains the styling for the components.
That’s how easy it is to build a live audio streaming app. You can refer to the Agora React Native API Reference for methods that can help you quickly add features like muting the mic, setting audio profiles, audio mixing, and much more.
Want to build Real-Time Engagement apps?
If you have questions, please call us at 408-879-5885. We’d be happy to help you add voice or video chat, streaming and messaging into your apps.
Stay inspired by accessing all RTE2020 session recordings. Gain access to innovative Real-Time-Engagement content and start innovating today.