
Add AI Denoising to your Video Calls using the Agora React Native UIKit

By Ekaansh Arora, in Developer

The React Native UIKit makes it easy to build your own video calling app in minutes. You can find out more about it here. In this blog post, we’ll take a look at how we can extend the UIKit and add custom features to it using the example of AI denoising.

Prerequisites

  • An Agora developer account (It’s free, sign up here!)
  • Node.js LTS release
  • An iOS or Android device for testing
  • A high-level understanding of React Native development

Setup

You can get the code for the example on GitHub, or you can create your own React Native project. Open a terminal and execute:

npx react-native init demo --template react-native-template-typescript
cd demo

Install the Agora React Native SDK and UIKit:

npm i react-native-agora agora-rn-uikit

At the time of writing, the latest agora-rn-uikit release is v3.3.0 and the latest react-native-agora release is v3.5.1.

If you’re using an iOS device, you’ll need to run cd ios && pod install to install the native dependencies with CocoaPods. You’ll also need to configure app signing and permissions. You can do this by opening the /ios/&lt;projectname&gt;.xcworkspace file in Xcode.

That’s the setup. You can now execute npm run android or npm run ios to start the server and see the bare-bones React Native app.

Building the Video Call

The UIKit gives you access to a high-level component called <AgoraUIKit> that can be used to render a full video call. The UIKit blog has an in-depth discussion on how you can customize the UI and features without writing much code. The <AgoraUIKit> component is built with smaller components that can also be used to build a fully custom experience without worrying about the video call logic.

We’ll clear out the App.tsx file and start fresh:
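A minimal App.tsx to start from might look like this (the markup here is a sketch, not the post’s exact code):

```typescript
// App.tsx — cleared out and started fresh
import React from 'react';
import {Text, View} from 'react-native';

const App = () => {
  return (
    <View style={{flex: 1, justifyContent: 'center', alignItems: 'center'}}>
      <Text>Agora AI Denoising Demo</Text>
    </View>
  );
};

export default App;
```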

We’ll create a state variable called inCall. When it’s true we’ll render our video call, and when it’s false we’ll render an empty <View> for now:
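A sketch of the inCall state described above (the start-call button is an assumption to make the toggle usable; the call UI itself comes later):

```typescript
// App.tsx
import React, {useState} from 'react';
import {Text, TouchableOpacity, View} from 'react-native';

const App = () => {
  // inCall controls whether we render the video call or the placeholder view
  const [inCall, setInCall] = useState(false);

  return inCall ? (
    // the video call UI will go here
    <View style={{flex: 1}} />
  ) : (
    // empty view for now; a button to start the call is a reasonable addition
    <View style={{flex: 1, justifyContent: 'center', alignItems: 'center'}}>
      <TouchableOpacity onPress={() => setInCall(true)}>
        <Text>Start Call</Text>
      </TouchableOpacity>
    </View>
  );
};

export default App;
```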

To build our video call, we’ll import the PropsContext, RtcConfigure, and GridVideo components from the UIKit. The RtcConfigure component handles the logic of the video call. We’ll wrap it with PropsContext to pass in the user props to the UIKit.

We’ll then render our <GridVideo> component, which will display all the user videos in a grid. You can use the <PinnedVideo> component instead. Because we’ll want to create a button to enable and disable AI denoising, we’ll create a custom component called <Controls>, which we’ll render below our grid:
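Putting the pieces above together, the call UI might look like the following. This is a sketch: it assumes PropsContext, RtcConfigure, and GridVideo are named exports of agora-rn-uikit (check the UIKit docs for the exact import paths for your version), and the appId and channel values are placeholders you need to replace with your own.

```typescript
import React from 'react';
import {PropsContext, RtcConfigure, GridVideo} from 'agora-rn-uikit';
import Controls from './Controls'; // our custom controls, defined below

const VideoCall = () => {
  const rtcProps = {
    appId: '<your-agora-app-id>', // placeholder: your Agora App ID
    channel: 'test',              // placeholder: your channel name
  };

  return (
    // PropsContext passes the user props down to the UIKit components
    <PropsContext.Provider value={{rtcProps, callbacks: {}, styleProps: {}}}>
      {/* RtcConfigure handles the video call logic */}
      <RtcConfigure>
        {/* GridVideo lays out all user videos in a grid;
            <PinnedVideo> can be swapped in here instead */}
        <GridVideo />
        <Controls />
      </RtcConfigure>
    </PropsContext.Provider>
  );
};

export default VideoCall;
```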

We can use the LocalAudioMute, LocalVideoMute, SwitchCamera, and Endcall buttons from the UIKit and render them inside a <View>.
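A sketch of the Controls component, again assuming the buttons are named exports of agora-rn-uikit:

```typescript
import React from 'react';
import {View} from 'react-native';
import {
  LocalAudioMute,
  LocalVideoMute,
  SwitchCamera,
  Endcall,
} from 'agora-rn-uikit';
import CustomButton from './CustomButton'; // our denoising toggle, defined below

const Controls = () => {
  return (
    <View
      style={{
        flexDirection: 'row',
        justifyContent: 'space-evenly',
        padding: 12,
      }}>
      <LocalAudioMute />
      <LocalVideoMute />
      <SwitchCamera />
      <Endcall />
      <CustomButton />
    </View>
  );
};

export default Controls;
```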

We’ll create a new component called CustomButton, which will contain the code to enable and disable our denoising feature:

We can access the RtcEngine instance using the RtcContext. This gives us access to the engine instance exposed by the Agora SDK that’s used by the UIKit. We’ll define a state variable, enabled, that toggles the denoising effect. We’ll create a button using &lt;TouchableOpacity&gt; that calls the enableDeepLearningDenoise method on our engine instance based on our state. Finally, we’ll add an image icon to show the status.
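A sketch of CustomButton under those assumptions. The shape of the RtcContext value (an object with an RtcEngine property) and the icon asset paths are assumptions; enableDeepLearningDenoise is part of react-native-agora v3.x.

```typescript
import React, {useContext, useState} from 'react';
import {Image, TouchableOpacity} from 'react-native';
import {RtcContext} from 'agora-rn-uikit';

const CustomButton = () => {
  // assumption: RtcContext exposes the engine as the RtcEngine property
  const {RtcEngine} = useContext(RtcContext);
  // tracks whether AI denoising is currently on
  const [enabled, setEnabled] = useState(false);

  const toggleDenoise = async () => {
    // enableDeepLearningDenoise toggles AI denoising on the engine
    await RtcEngine.enableDeepLearningDenoise(!enabled);
    setEnabled(!enabled);
  };

  return (
    <TouchableOpacity onPress={toggleDenoise}>
      <Image
        style={{width: 40, height: 40}}
        // hypothetical asset paths — supply your own icons
        source={
          enabled
            ? require('./assets/denoise-on.png')
            : require('./assets/denoise-off.png')
        }
      />
    </TouchableOpacity>
  );
};

export default CustomButton;
```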

That’s all we need to do to add a custom feature. You can even add event listeners in the same fashion to access engine events and perform custom operations.

Conclusion

If there are features you think would be good to add to Agora UIKit for React Native that many users would benefit from, feel free to fork the repository and open a pull request. Or open an issue on the repository with the feature request. All contributions are appreciated!