Update 20-March-22: The blog has been updated to work with v4.0.0 of the Agora React Native UIKit.
The React Native UIKit makes it easy to build your own video calling app in minutes. You can find out more about it here. In this blog post, we'll take a look at how we can extend the UIKit and add custom features to it, using the example of AI denoising.
Prerequisites
- An Agora developer account (It's free, sign up here!)
- Node.js LTS release
- An iOS or Android device for testing
- A high-level understanding of React Native development
Setup
You can get the code for the example on GitHub, or you can create your own React Native project. Open a terminal and execute:
npx react-native init demo --template react-native-template-typescript
cd demo
Install the Agora React Native SDKs and UIKit:
npm i react-native-agora agora-react-native-rtm agora-rn-uikit
At the time of writing this post, the current agora-rn-uikit release is v4.0.0, the react-native-agora release is v3.7.0, and the agora-react-native-rtm release is v1.5.0.
If you're using an iOS device, you'll need to run cd ios && pod install to install the native dependencies with CocoaPods. You'll also need to configure app signing and permissions, which you can do by opening the /ios/.xcworkspace file in Xcode.
That's the setup. You can now execute npm run android or npm run ios to start the server and see the bare-bones React Native app.
Building the Video Call
The UIKit gives you access to a high-level component called <AgoraUIKit> that can be used to render a full video call. The UIKit blog has an in-depth discussion on how you can customize the UI and features without writing much code. The <AgoraUIKit> component is built with smaller components that can also be used to build a fully custom experience without worrying about the video call logic.
We'll clear out the App.tsx file and start fresh.
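A cleared-out App.tsx can be as minimal as this sketch, which we'll build up over the next steps:

```tsx
import React from 'react';
import {View} from 'react-native';

const App = () => {
  return <View />;
};

export default App;
```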
We'll create a state variable called inCall. When it's true we'll render our video call, and when it's false we'll render an empty <View> for now.
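Here's a minimal sketch of that state and the conditional render (how you flip inCall back and forth, for example with a start button or navigation, is up to you):

```tsx
import React, {useState} from 'react';
import {View} from 'react-native';

const App = () => {
  // When inCall is true we render the video call; when it's false, an empty View for now.
  // setInCall will be used by the EndCall callback we wire up in the next step.
  const [inCall, setInCall] = useState(true);

  return inCall ? (
    <View>{/* the video call UI goes here */}</View>
  ) : (
    <View />
  );
};

export default App;
```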
To build our video call, we'll import the PropsContext, RtcConfigure, and GridVideo components from the UIKit. The RtcConfigure component handles the logic of the video call. We'll wrap it with PropsContext to pass in the user props to the UIKit.
We'll then render our <GridVideo> component, which will display all the user videos in a grid. (You can use the <PinnedVideo> component instead.) Because we'll want a button to enable and disable AI denoising, we'll create a custom component called <Controls>, which we'll render below our grid.
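Wiring that together might look like the sketch below. I'm assuming here that PropsContext, RtcConfigure, and GridVideo are exported from the agora-rn-uikit package root, and that the props object takes an rtcProps/callbacks shape; check the UIKit API reference for the exact prop types of your version, and fill in your own App ID and channel name:

```tsx
import React, {useState} from 'react';
import {View} from 'react-native';
import {PropsContext, RtcConfigure, GridVideo} from 'agora-rn-uikit';
import Controls from './Controls'; // our custom component, defined next

const App = () => {
  const [inCall, setInCall] = useState(true);

  // Assumed prop shape: pass your Agora App ID and channel name,
  // and leave the call when the user presses the end-call button.
  const props = {
    rtcProps: {
      appId: '<Agora App ID>',
      channel: 'test',
    },
    callbacks: {
      EndCall: () => setInCall(false),
    },
  };

  return inCall ? (
    <PropsContext.Provider value={props}>
      <RtcConfigure>
        <GridVideo />
        <Controls />
      </RtcConfigure>
    </PropsContext.Provider>
  ) : (
    <View />
  );
};

export default App;
```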
We can use the LocalAudioMute, LocalVideoMute, SwitchCamera, and Endcall buttons from the UIKit and render them inside a <View>.
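A Controls component along those lines might look like this sketch (again assuming the button components are exported from the package root, with the names spelled as listed above; the layout styling is just an example):

```tsx
import React from 'react';
import {View, StyleSheet} from 'react-native';
import {
  LocalAudioMute,
  LocalVideoMute,
  SwitchCamera,
  Endcall,
} from 'agora-rn-uikit';
import CustomButton from './CustomButton'; // defined in the next step

const Controls = () => (
  <View style={styles.controls}>
    <LocalAudioMute />
    <LocalVideoMute />
    <SwitchCamera />
    <CustomButton />
    <Endcall />
  </View>
);

const styles = StyleSheet.create({
  controls: {
    flexDirection: 'row',
    justifyContent: 'space-evenly',
    paddingVertical: 8,
  },
});

export default Controls;
```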
We'll create a new component called CustomButton, which will contain the code to enable and disable our denoising feature.
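Here's a sketch of that component. I'm assuming the RtcContext value exposes the engine instance as RtcEngine (check the UIKit API reference if your version differs), and the icon assets and their paths are placeholders:

```tsx
import React, {useContext, useState} from 'react';
import {TouchableOpacity, Image, StyleSheet} from 'react-native';
import {RtcContext} from 'agora-rn-uikit';

const CustomButton = () => {
  // Assumption: the RtcContext value exposes the engine instance as RtcEngine
  const {RtcEngine} = useContext(RtcContext);
  const [enabled, setEnabled] = useState(false);

  const toggleDenoise = async () => {
    // enableDeepLearningDenoise is part of the react-native-agora engine API
    await RtcEngine.enableDeepLearningDenoise(!enabled);
    setEnabled(!enabled);
  };

  return (
    <TouchableOpacity onPress={toggleDenoise}>
      {/* Placeholder icons to show whether denoising is on or off */}
      <Image
        style={styles.icon}
        source={
          enabled
            ? require('./assets/denoise-on.png')
            : require('./assets/denoise-off.png')
        }
      />
    </TouchableOpacity>
  );
};

const styles = StyleSheet.create({
  icon: {width: 32, height: 32},
});

export default CustomButton;
```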
We can access the RtcEngine instance using the RtcContext. This gives us access to the engine instance exposed by the Agora SDK that's used by the UIKit. We'll define a state variable, enabled, to toggle the denoising effect. We'll create a button using <TouchableOpacity> that calls the enableDeepLearningDenoise method on our engine instance based on that state, and we'll add an image icon to show the status.
That's all we need to do to add a custom feature. You can even add event listeners in the same fashion to access engine events and perform custom operations.
Conclusion
If there are features you think would be a good addition to the Agora UIKit for React Native and would benefit many users, feel free to fork the repository and open a pull request, or open an issue on the repository with the feature request. All contributions are appreciated!
Other Resources
For more information about building applications using Agora SDKs, take a look at the Agora Video Call Quickstart Guide and Agora API Reference. You can also take a look at the UIKit GitHub Repo, API Reference, and Wiki.
And I invite you to join the Agora Developer Slack community. Feel free to ask any questions about the UIKit in the #react-native-help-me channel.